The Synopse mORMot Framework source code is licensed under GPL / LGPL / MPL terms, and is free to be included in any application.
The Synopse mORMot Framework Documentation is a free document, released under a GPL 3.0 License, distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Trademark Notice
Rather than indicating every occurrence of a trademarked name as such, this document uses the names only in an editorial fashion and to the benefit of the trademark owner with no intention of infringement of the trademark.
Prepared by: Arnaud Bouchez
Title: Project Manager
Signature:
Date:
Document Purpose
The purpose of this Software Architecture Design document is to describe the implications of each software requirement specification on all the affected software modules of the Synopse mORMot Framework project.
The whole Software documentation process follows the typical steps of this diagram:
Design Inputs, FMEA and Risk Specifications
Purpose
This Software Architecture Design (SAD) document applies to the 1.18 release of the Synopse mORMot Framework library.
After a deep presentation of the framework architecture and main features, each source code unit is detailed, with clear diagrams and tables showing the dependencies between the units, and the class hierarchy of the objects implemented within.
The SynFile main demo is presented on its own, and can be used as a general User Guide for its basic ORM features and User Interface generation - see below.
At the end of this document, Software Requirements Specifications (SWRS) items are linked directly to the class or function involved in the Software Design Document (SDD), as extracted from the source code.
Responsibilities
Support is available in the project forum - https://synopse.info/forum - from the mORMot Open Source community;
Tickets can be created on a public tracker web site located at https://synopse.info/fossil , which also publishes the latest version of the project source code;
Synopse can provide additional support, expertise or enhancements, on request;
Synopse's work on the framework is distributed without any warranty, according to the chosen license terms - see below;
This documentation is released under the GPL (GNU General Public License) terms, without any warranty of any kind.
GNU General Public License
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
1. Synopse mORMot Overview
Meet the mORMot
Synopse mORMot is an Open Source Client-Server ORM SOA MVC framework for Delphi 6 up to the latest available Delphi version and FPC 3.2, targeting Win/Linux for the server, and any platform for clients (including mobile or AJAX).
The main features of mORMot are therefore:
ORM/ODM: objects persistence on almost any database (SQL or NoSQL);
SOA: organize your business logic into REST services;
Clients: consume your data or services from any platform, via ORM classes or SOA interfaces;
Web MVC: publish your ORM/SOA process as responsive Web Applications.
All these features are available with local or remote access, via an auto-configuring Client-Server REST design.
General mORMot architecture
mORMot offers all features needed for building any kind of modern software project, with state-of-the-art integrated software components, designed for both completeness and complementarity, offering convention-over-configuration solutions, and implemented for speed and efficiency.
For storing some data, you define a class, and the framework will take care of everything: routing, JSON marshalling, table creation, SQL generation, validation.
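As a minimal sketch - the TSQLBaby class name and its fields are purely illustrative - defining and persisting such a class may look like this:

  uses SynCommons, mORMot, mORMotSQLite3;

  type
    // minimal sketch of an ORM class: each published property becomes a column;
    // the ID: TID primary key is inherited from TSQLRecord
    TSQLBaby = class(TSQLRecord)
    private
      fName: RawUTF8;
      fBirthDate: TDateTime;
    published
      property Name: RawUTF8 read fName write fName;
      property BirthDate: TDateTime read fBirthDate write fBirthDate;
    end;

  var
    Model: TSQLModel;
    Server: TSQLRestServerDB;
  begin
    Model := TSQLModel.Create([TSQLBaby]);                 // the ORM data model
    Server := TSQLRestServerDB.Create(Model, 'baby.db3');  // SQLite3 storage
    Server.CreateMissingTables;  // generates and executes the CREATE TABLE SQL
  end;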
For creating a service, you define an interface and a class, and you are done. Of course, the same ORM/ODM or SOA methods will run on both server and client sides: code once, use everywhere!
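For instance - assuming Server and Client are existing TSQLRestServer / TSQLRestClientURI instances sharing the same TSQLModel, and ICalculator is a purely illustrative contract:

  type
    ICalculator = interface(IInvokable)
      ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']  // any unique interface GUID
      function Add(n1, n2: Integer): Integer;
    end;

    TServiceCalculator = class(TInterfacedObject, ICalculator)
    public
      function Add(n1, n2: Integer): Integer;
    end;

  function TServiceCalculator.Add(n1, n2: Integer): Integer;
  begin
    result := n1 + n2;
  end;

  // server side - publish the implementation class as one shared instance:
  Server.ServiceDefine(TServiceCalculator, [ICalculator], sicShared);

  // client side - register the same contract, then call it like a local object
  // (Calc being a local variable of type ICalculator):
  Client.ServiceDefine([ICalculator], sicShared);
  if Client.Services['Calculator'].Get(Calc) then
    writeln(Calc.Add(10, 20));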
For building a MVC web site, write a Controller class in Delphi, then some HTML Views using Mustache templates, leveraging the same ORM/ODM or SOA methods as Model.
If you need a HTTP server, a proxy redirection, master/slave replication, publish-subscribe, a test, a mock, add security, define users or manage rights, a script engine, a report, User Interface, switch to XML format or publish HTML dynamic pages - just pick up the right class or method. If you need a tool or feature, it is probably already there, waiting for you to use it.
The table of contents of this document makes it clear: this is no ordinary piece of software.
The mORMot framework provides an Open Source self-sufficient set of units (even Delphi starter edition is enough) for creating any Multi-tier application, up to the most complex Domain-Driven design - see below:
Presentation layer featuring MVC UI generation with i18n and reporting for rich Delphi clients, Mustache-based templates for web views - see below - or rich AJAX clients;
Application layer implementing Service Oriented Architecture via interface-based services (like WCF) and Client-Server ORM - following a RESTful model using JSON over several communication protocols (e.g. HTTP/1.1 and HTTPS);
Domain Model layer handling all the needed business logic in plain Delphi objects, including high-level managed types like dynamic arrays or records for Value Objects, dedicated classes for Entities or Aggregates, and variant storage with late-binding for dynamic documents - your business logic may also be completed in JavaScript on the server side as stated below;
Data persistence infrastructure layer with ORM persistence on direct Oracle, MS SQL, OleDB, ODBC, Zeos connection or any DB.pas provider (e.g. NexusDB, DBExpress, FireDAC, AnyDAC, UniDAC...), with a powerful SQLite3 kernel, and direct SQL access if needed - including SQL auto-generation for SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix and NexusDB - the ORM is also able to use NoSQL engines via a native MongoDB connection, for ODM persistence;
Cross-Cutting infrastructure layers for handling data filtering and validation, security, session, cache, logging and testing (the framework uses a test-driven approach and features stubbing and mocking).
If you do not know some of those concepts, don't worry: this document will detail them - see below.
With mORMot, ORM is not used only for data persistence of objects in databases (like in other implementations), but as part of a global n-Tier, Service Oriented Architecture (SOA), ready to implement Domain-Driven solutions. mORMot is not another ORM on which a transmission layer has been added, like almost everything existing in Delphi, C# or Java: this is a full Client-Server ORM/SOA from the ground up. This really makes the difference.
The business logic of your applications will be easily exposed as Services, and will be accessible from light clients (written in Delphi or by any other means, including AJAX).
The framework core is non-visual: it provides only a set of classes to be used from code. But some UI units are also available (including screen auto-creation, reporting and ribbon GUI), and you can use it from any RAD, web, or AJAX client.
No dependency is needed at the client side (no DB driver, or third-party runtime): it is able to connect via standard HTTP or HTTPS, even through a corporate proxy or a VPN. Rich Delphi clients can be deployed just by copying and running a stand-alone small executable, with no installation process. Client authentication is performed via several secure methods, and communication can be encrypted via HTTPS or with a proprietary SHA/AES-256 algorithm. SOA endpoints are configured automatically for each published interface on both server and client sides, and creating a load-balancing proxy is a matter of one method call. Changing from one database engine to another is just a matter of one line of code; full audit-trail history is available, if needed, to track all changes of any class persisted by the ORM/ODM.
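As an illustration of such a one-line database switch - the MS SQL connection settings below are placeholders, and Model / Server are the usual TSQLModel / TSQLRestServerDB pair:

  uses SynDB, SynOleDB, mORMotDB;

  var
    Props: TSQLDBConnectionProperties;
  begin
    // connect to an external engine (here MS SQL via OleDB - settings are dummy)
    Props := TOleDBMSSQLConnectionProperties.Create('.\SQLEXPRESS', 'mydb', '', '');
    // the single line which maps all tables of the model to that engine
    VirtualTableExternalRegisterAll(Model, Props);
    // the SQLite3 kernel remains as the REST / virtual-table core
    Server := TSQLRestServerDB.Create(Model, ':memory:');
    Server.CreateMissingTables;
  end;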
Cross-platform clients can be easily created, as Win32 and Win64 executables of course, but also for any platform supported by the Delphi compiler (including Mac OSX, iPhone/iPad and Android), or by FreePascal / Lazarus. AJAX applications can easily be created via Smart Mobile Studio, and any mobile operating system can be targeted as an HTML5 rich web client or a stand-alone PhoneGap application, ready to be added to the Windows, Apple or Google store. See below for how mORMot client code generation leverages all platforms.
Speed and scalability have been implemented from the ground up - see below: a genuine optimized multi-threaded core lets a single server handle more than 50,000 concurrent clients, faster than DataSnap, WCF or node.js, and our rich SOA design is able to implement both vertical and horizontal scalable hosting, using recognized enterprise-level SQL or NoSQL databases for storage.
In short, with mORMot, your ROI is maximized.
1.1. Client-Server ORM/SOA framework
The Synopse mORMot framework implements a Client-Server RESTful architecture, trying to follow some MVC, N-Tier, ORM, SOA best-practice patterns - see below.
Several clients can access the same remote or local server, using diverse communication protocols:
General mORMot architecture - Client / Server
Or the application can be stand-alone:
General mORMot architecture - Stand-alone application
Switching from this embedded architecture to the Client-Server one is just a matter of how the mORMot classes are initialized. For instance, the very same executable can even be running as a stand-alone application, a server, or a client, depending on some run-time parameters!
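A minimal sketch of such run-time switching - the 'standalone' command-line switch and the host name are assumptions for this example, not framework conventions:

  var
    Rest: TSQLRest;
  begin
    if FindCmdLineSwitch('standalone') then
      // embedded mode: direct SQLite3 storage next to the executable
      Rest := TSQLRestServerDB.Create(Model, ChangeFileExt(ParamStr(0), '.db3'))
    else
      // client mode: connect to a remote mORMot server over HTTP
      Rest := TSQLHttpClient.Create('server.mycorp.lan', '8080', Model);
    // from here, the very same ORM/SOA code runs against Rest, e.g.
    // Rest.Add(...) / Rest.Retrieve(...) or Rest.Services[...]
  end;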
1.2. Highlights
At first, some points can be highlighted, which make this framework distinct from other available solutions:
Client-Server orientation, with optimized request caching and intelligent update over a RESTful architecture - but can be used in stand-alone applications;
No RAD components, but true ORM and SOA approach;
Multi-Tier architecture, with integrated Business rules as fast ORM-based classes and Domain-Driven design;
Service-Oriented-Architecture model, using custom RESTful JSON services - you can send as JSON any TStrings, TCollection, TPersistent or TObject (via registration of a custom serializer) instance, or even a dynamic array, or any record content, with integrated JSON serialization, via an interface-based contract shared on both client and server sides;
Truly RESTful authentication with a dual security model (session + per-query);
Very fast JSON producer and parser, with caching at SQL level;
A fast, configuration-less HTTP / HTTPS server using the http.sys kernel-mode server (see the sketch after this list) - but it may also communicate via named pipes, Windows Messages or in-process as lighter alternatives;
Using SQLite3 as its kernel, but able to connect to any other database (via OleDB / ODBC / Zeos or direct client library access e.g. for Oracle) - the SynDB.pas classes are self-sufficient, and do not depend on the Delphi DB.pas unit nor any third-party library (so even the Delphi Starter edition is enough) - but the SynDBDataset unit is also available to access any DB.pas based solution (e.g. NexusDB, DBExpress, FireDAC, AnyDAC, UniDAC or even the BDE...);
RESTful ORM access to a NoSQL database engine like MongoDB with the same code base;
Ability to use SQL and RESTful requests over multiple databases at once (thanks to SQLite3 unique Virtual Tables mechanism);
Full Text Search engine included, with enhanced Google-like ranking algorithm;
Server-side JavaScript engine, for defining your business intelligence;
Direct User Interface generation: grids are created on the fly, together with a modern Ribbon ('Office 2007'-like) screen layout - the code just has to define actions, and assign them to the tables, in order to construct the whole interface from a few lines of code, without any IDE usage;
Integrated Reporting system, which could serve complex PDF reports from your application;
Designed to be as fast as possible (asm used when needed, buffered reading and writing avoiding most memory consumption, multi-thread ready architecture...) so benchmark results are impressive when compared to other solutions - see below;
More than 1800 pages of documentation;
Delphi, FreePascal, mobile and AJAX clients can share the same server, and ORM/SOA client access code can be generated on request for any kind of application - see below;
Full source code provided - so you can enhance it to fulfill any need;
Works from Delphi 6 up to the latest available Delphi version and FPC 3.2.x - no need to upgrade your IDE - and is truly Unicode (it uses UTF-8 encoding in its kernel, just like JSON).
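As referenced in the HTTP server item above, here is a minimal sketch - assuming Server is an existing TSQLRestServer instance, and the port is arbitrary - of publishing it over the http.sys kernel-mode server:

  uses mORMotHttpServer;

  var
    HttpServer: TSQLHttpServer;
  begin
    // register the URI on http.sys and start listening on port 8080
    HttpServer := TSQLHttpServer.Create('8080', [Server], '+', useHttpApiRegisteringURI);
    HttpServer.AccessControlAllowOrigin := '*';  // optional: allow AJAX cross-origin calls
  end;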
1.3. Benefits
As you can see from the previous section, mORMot provides a comprehensive set of features that can help you manage your crosscutting concerns through a reusable set of components and core functionality.
Meet the mORMot
Of course, like many developers, you may suffer from the well-known NIH ("Not Invented Here") syndrome. On the other hand, it is a commonly accepted fact that the use of standard and proven code libraries and components can save development time, minimize costs, reduce the use of precious test resources, and decrease the overall maintenance effort.
Benefits of mORMot are therefore:
KISS convention-over-configuration design: you have all needed features at hand, but with only one way of doing it - less configuration and less confusion for developers and their customers;
Pascal oriented: the implementation does not follow existing Java or C# patterns (with generics (ab)use, variable syntaxes and a black-box approach), but tries to unleash the Object Pascal genius;
Integrated: all crosscutting scenarios are coupled, so you benefit from consistent APIs and documentation, a lot of code reuse, and a JSON/RESTful orientation from the ground up;
Tested: most of the framework is test-driven, and all regression tests are provided, including system-wide integration tests;
Do-not-reinvent-the-wheel, since we did it for you: it is now time to focus on your business;
Open Source, documented and maintained: the project has been developed for years, with some active members - mORMot won't leave you soon!
1.4. Legacy code and existing projects
Even if mORMot is more easily used in a project designed from scratch, it fits very well the purpose of evolving any existing Delphi project, or creating the server side of an AJAX application. One benefit of such a framework is to facilitate the transition from a traditional Client-Server architecture to an N-Tier layered pattern.
Due to its modular design, you can integrate some framework bricks to your existing application:
You may add logging to your code - see below, and the sketch after this list - to track unresolved issues, and add customer-side performance profiling;
Use low-level classes like record or dynamic array wrappers - see below, or our dynamic document storage via variant - see below, including JSON or binary persistence;
You can use the direct DB layers, including the TQuery emulation class - see below - to replace some BDE queries, or introduce nice unique features like direct database access or array binding for very fast data insertion - see below, or switch to a NoSQL database - see below;
Reports could benefit from the mORMotReport.pas code-based system, which is very easy to use even on the server side (serving PDF files), when your business logic heavily relies on objects, not direct DB access - see below;
HTTP requests may be made available using Client-Server services via methods - see below, e.g. for rendering HTML pages generated on the fly with Mustache templates - see below - pictures or PDF reports;
You can little by little move your logic out of the client side code into some server services defined via interfaces, without the overhead of SOAP or WCF - see below; migration to SOA is the main benefit of mORMot for existing projects;
Make your application ready to offer a RESTful interface, e.g. for consuming JSON content via AJAX or mobile clients - see below;
New tables may be defined via the ORM/ODM features of mORMot, still hosted in your external SQL server - see below, as any previous data; in particular, mixed pure-ORM and regular-SQL requests may coexist; or mORMot's data modeling may balance your storage among several servers (and technologies, like NoSQL);
Sharing the same tables between legacy SQL code and the mORMot ORM is possible, but to avoid consistency problems, you should follow some rules detailed below;
You may benefit from our very fast in-memory engine, a dedicated SQLite3-based consolidation database or even the caching features - see below, shared on the server side, when performance is needed - it may help integrating some CQRS pattern (Command Query Responsibility Segregation) into your application via a RESTful interface, and delegate some queries from your main database;
If you are still using an old version of Delphi, and can't easily move up due to some third party components or existing code base, mORMot will offer all the needed features to start ORM, N-Tier and SOA, starting with a Delphi 6 edition.
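As a minimal sketch of the logging item referenced at the top of this list - the settings and the logged text are only illustrative:

  uses SynLog;

  begin
    with TSynLog.Family do
    begin
      Level := LOG_VERBOSE;               // or a custom set of events
      PerThreadLog := ptIdentifiedInOnFile;
    end;
    TSynLog.Add.Log(sllInfo, 'Application started');  // manual entry
    // automatic enter/leave (with timing) of a method would be:
    //   ILog := TSynLog.Enter(self, 'MyMethod');
  end;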
mORMot implements the needed techniques for introducing what Michael Feathers calls, in his book Working Effectively With Legacy Code, a seam. A seam is an area where you can start to cleave off some legacy code and begin to introduce changes. Even the mocking abilities of mORMot - see below - will help you in this delicate task - see http://www.infoq.com/articles/Utilizing-Logging
Do not forget that Synopse, as a company, is able to offer dedicated audit and support for such a migration. The sooner, the better.
1.5. FAQ
Before you go any further, here is a simple FAQ containing the most frequent questions we have received on our forums.
First of all, take a look at the keyword index available at the very beginning of this document. The underlined entries target the main article(s) about a given concept or technical term.
Feel free to give your feedback at https://synopse.info/forum asking new questions or improving answers!
Should I use mORMot 1, since mORMot 2 is alive and appears to be more maintained? You are right: mORMot 2 is the way to go for any new project; mORMot 1 is considered in a bug-fix-only state. For an existing mORMot 1 project, we will continue to fix the identified bugs and supply the SQLite3 static updates, but no new feature or enhancement will appear on this branch any more. Consider migrating your project to mORMot 2 as soon as you have a little time. It is not a complex process, since most of the code is compatible once you switch your whole project to the new units.
Your SAD doc is too long to read through in a short period. Too much documentation can kill the documentation! But you do not need to read the whole document: most of it is a detailed description of every unit, object and class. The first part, however, is worth reading, otherwise you are very likely to miss some main concepts or patterns. It just takes 15-30 minutes! Also read below to find out in which direction you may need to go when writing your server code. Consider the slides available at https://drive.google.com/folderview?id=0B0r8u-FwvxWdeVJVZnBhSEpKYkE
Where should I start? Take a look at the Architecture principles below, then download and install the sources below, then compile and run the TestSQL3.dpr program. Check about ORM below, SOA below and MVC below, then test the various samples (from the SQLite3\Samples folder), especially 01, 02, 04, 11, 12, 14, 17, 26, 28, 30 and the MainDemo.
So far, I can see your mORMot fits most of the requirements, but it seems to be only for database Client-Server apps. First of all, the framework is a set of bricks, so you can use it e.g. to build interface-based services, even with no database at all. We tried to make its main features modular and uncoupled.
I am not a great fan of ORM, sorry, I still like SQL and have some experience of that. Sometimes a sophisticated SQL query is hard to translate into ORM code. ORM can make development much easier; but you can use e.g. interface-based services and "manual" SQL statements - in this case, you have at hand the classes described below in mORMot, which allow very high performance and direct export to JSON.
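A minimal sketch of such "manual" SQL with direct JSON export, assuming the SynDB SQLite3 provider and an illustrative Customer table:

  uses SynCommons, SynDB, SynDBSQLite3;

  var
    Props: TSQLDBConnectionProperties;
    Rows: ISQLDBRows;
    JSON: RawUTF8;
  begin
    Props := TSQLDBSQLite3ConnectionProperties.Create('data.db3', '', '', '');
    try
      // execute a parameterized statement and export the result set as JSON
      Rows := Props.Execute('select * from Customer where Name like ?', ['A%']);
      JSON := Rows.FetchAllAsJSON(true);  // true = expanded "field":value objects
      Rows := nil;                        // release before Props is freed
    finally
      Props.Free;
    end;
  end;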
I am tempted by using an ORM, but mORMot forces you to inherit from a root TSQLRecord type, whereas I'd like to use any kind of object. We will discuss this in detail below. Adding attributes to an existing class is tempting, but will pollute your code in the end, mixing persistence and business logic: see Persistence Ignorance and Aggregates below. The framework proposes a second level of object mapping, allowing persistence of any kind of PODO (Plain Old Delphi Object), by defining CQRS services - see below.
I would like to replace pieces of Delphi code with mORMot and the DDD concept in a huge system, but its legacy database doesn't have integer primary keys, and the mORMot ORM expects a TID-like field. By design, such legacy tables are not compatible with SQLite3 virtual tables, or our ORM - unless you add an additional ID integer primary key, which may not be the best idea. Some hints: write a persistence service as interface/class (as required by DDD - see below); uncouple persistence and SOA services (i.e. the SOA TSQLRestServer is a TSQLRestServerFullMemory and not a DB/ORM TSQLRestServerDB); reuse your existing SQL statements, with SynDB as the access layer if possible (you will have better performance, and direct JSON support); use the ORM for MicroService local persistence (with SQLite3), and/or for new tables in your legacy DB (or another storage, e.g. MongoDB).
Why are you not using the latest features of the compiler, like generics or class attributes? Our framework does not rely on generics, but on the power of the Object Pascal type system: specifying a class or interface type as a parameter is safe and efficient - and generics tend to bloat the executable size, lower performance (the current RTL is not very optimized, and sometimes bugged), and hide implementation details. Some methods are available for newer versions of the compiler, introducing access via generics; but it was not mandatory to depend on them. We also identified, as several Java or C# gurus did, that class attributes may sound like a good idea, but tend to pollute the code, and introduce unexpected coupling. Last but not least, those features are incompatible with the older versions of Delphi we would like to support, and may reduce compatibility with FPC.
I also notice in your SAD doc that data types are different from Delphi. You have RawUTF8, etc., which puzzles me - what are they? You can for sure use standard Delphi string types, but some more optimized types were defined: since the whole framework is UTF-8 based, we defined a dedicated type, which works with all versions of Delphi, before and after Delphi 2009. By the way, just search for RawUTF8 in the keyword index of this document, or see below.
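In practice, conversion between the native string type and RawUTF8 relies on a few helper functions from SynCommons.pas - a minimal sketch:

  uses SynCommons;

  var
    u: RawUTF8;  // UTF-8 encoded string, used everywhere in the framework
    s: string;   // native RTL/VCL string (UnicodeString since Delphi 2009)
  begin
    u := StringToUTF8('some text');            // explicit, lossless conversion
    s := UTF8ToString(u);
    u := FormatUTF8('% = %', ['answer', 42]);  // fast UTF-8 formatting helper
  end;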
During my tests, my client receives non-standard JSON with unquoted fields. Internally, the framework uses JSON in MongoDB extended syntax, i.e. fields are not quoted - this gives better performance and reduces memory and bandwidth with a mORMot client. To receive "field":value instead of field:value, just add a proper User-Agent HTTP header to the client request (as any browser does), and the server will emit standard JSON.
I encounter strange issues about indexes or collations with external SQLite3 tools. By default, our ORM uses its proprietary SYSTEMNOCASE collation, which is perfect for Win1252 accents, but unknown outside of mORMot. Use our SynDBExplorer tool instead. Or use a standard collation when defining a new ORM table, as stated below.
When I work with floating-point values and JSON, numerical values with more than 4 decimals are sometimes converted into JSON strings. By default, double values are disabled in the JSON serialization, to avoid any hidden precision loss during conversion: see below for how to enable them.
I got an access violation with SynDB ISQLDBRows. You need to explicitly release the ISQLDBRows instance, by setting it to nil, before freeing the owner's connection - see below.
A deadlock occurs with interface callbacks. When working with asynchronous notifications over WebSockets, you need to ensure you don't fire a callback directly from a main method execution - see below for several solutions.
All the objects seem to be non-VCL components, meaning I need to code each property and remember them all. This is indeed... a feature. The framework is not RAD, but fully object-oriented. Thanks to the Delphi IDE, you can access all property descriptions via auto-completion and/or code navigation. We tried to make the documentation exhaustive and accurate. You can still use RAD for UI design, but let the business logic be abstracted in pure code. See e.g. the mORMotVCL.pas unit, which can publish any ORM result as a TDataSource for your UI.
I know you joined the DataSnap performance discussion and your framework earned a good reputation there. If I want to use your framework to replace my old DataSnap project, how easy will it be? If you used DataSnap to build method-based services, translation into mORMot will just be a matter of code refactoring. And you will benefit from new features like interface-based services - see below - which are much more advanced than the method-based pattern, avoid generating the client class via a wizard, and offer additional features - see below.
What is SMS? Does it have any advantage compared to jQuery or AngularJS? Smart Mobile Studio is an IDE with a source runtime, able to develop and compile an Object Pascal project into an HTML 5 / CSS 3 / JavaScript embedded application, i.e. able to work stand-alone with no remote server. When used with mORMot on the server side, you can use the very same Object Pascal language on both server and client sides, with strong typing and true OOP design. You then get secure authentication and JSON communication, with connected or off-line modes. Your SmartPascal client code can be generated by your mORMot server, as stated below. We currently focus on TMS Web Core integration, which seems a newer - and better supported - alternative.
I am looking for a substitute solution to WebSnap. Do you have any sample or documentation describing how to build a robust web server? You can indeed easily create a modern, scaling MVC / MVVM Web Application. Your mORMot server can easily publish its ORM / SOA business logic as Model, use Mustache logic-less template rendering - see below - for the Views, and define the ViewModel / Controller as regular Delphi methods. See below for more details, and to discover a sample "blog" application.
Have you considered using a popular source code hosting service like GitHub or BitBucket? We love to host our own source code repository, and find fossil a perfect match for our needs, with a friendly approach. But we created a parallel repository on GitHub, so that you may be able to monitor or fork our projects - see https://github.com/synopse/mORMot. Note that you can get a daily snapshot of our official source code repository directly from https://synopse.info/files/mORMotNightlyBuild.zip
Why is this framework named mORMot? - Because its initial identifier was "Synopse SQLite3 database framework", which may suggest a SQLite3-only library, whereas the framework is now able to connect to any database engine; - Because we like mountains, and those large ground rodents; - Because marmots do hibernate, just like our precious objects; - Because marmots are highly social and use loud whistles to communicate with one another, just like our applications are designed not to be isolated; - Because even if they eat greens, they tend to fight in Spring for their realm; - Because it may be an acronym for "Manage Object Relational Mapping Over Territory", or whatever you may think of...
2. Architecture principles
Adopt a mORMot
This framework tries to implement some "best-practice" patterns, among them:
All those points make any project implementation possible, up to a complex Domain-Driven Design - see below.
2.1. General design
A general design of the mORMot architecture is shown in the following diagram:
General mORMot architecture - Client Server implementation
In addition, you may use the following transversal features:
General mORMot architecture - Cross-Cutting features
Don't be afraid. Such a drawing may look huge and confusing, especially if you have a RAD background and have not worked much with modern design patterns.
The following pages will detail and explain how the framework implements this architecture, and sample code is available to help you discover the amazing mORMot realm.
In the previous diagram, you can already identify some key concepts of mORMot:
Cross-Platform, multi clients, and multi devices;
Can integrate to an existing code base or architecture;
Client-Server RESTful design;
Layered (multi-tier) implementation;
Process can be defined via a set of Services (SOA);
Business rules and data model are shared by Clients and Server;
Data is mapped by objects (ORM/ODM);
Databases can be an embedded SQLite3, one or several standard RDBMS (with auto-generated SQL), a MongoDB NoSQL engine, fast in-memory objects lists, or another mORMot server;
Security (authentication and authorization) is integrated to all layers;
User interface and reporting classes are available;
You can write a MVC/MVVM AJAX or Web Application from your ORM/SOA methods;
Based on simple and proven patterns (REST, JSON, MVC, SOLID);
A consistent testing and debugging API is integrated;
Optimized for both scaling and stability.
2.2. Architecture Design Process
The first point is to state that you can't talk about architecture in isolation. Architecture is always driven by the actual needs of the application, not by whatever the architect read about last night and is dying to see how it works in the real world. There is no "one architecture fits all" nor "one framework fits all" solution. Architecture is simply a way of thinking about how you build your own software.
In fact, software architecture is not about theory and diagrams, nor just about best practice, but about a way of implementing a working solution for your customers.
Architecture Iterative Process (SCRUM)
This diagram presents how architecture is part of a typical SCRUM iterative agile process. Even if some people in your company are in charge of global software architecture, or even if your project management follows a classic V-cycle and does not follow the agile manifesto, architecture should never be seen as a set of rules to be applied by each and every developer. Architecture is part of the coding, but not all of the coding.
Here are some ways of achieving weak design:
Let each developer decide, from his/her own knowledge (and mood?), how to implement the use cases, with no review, implementation documentation, nor peer collaboration;
Let each team decide, from its own knowledge (and untold internal leadership?), how to implement the use cases, with no system-wide collaboration;
Let architecture be decided at so high level that it won't affect the actual coding style of the developers (just don't be caught);
Let architecture be so detailed that each code line has to follow a typical implementation pattern, therefore producing over-engineered code;
Let architecture map the existing, with some middle-term objectives at best;
Let technology, frameworks or just-blogged ideas be used with no discrimination (do not trust the sirens of dev marketing).
Therefore, some advices:
Collaboration is a need - no one is alone, no team is better, no manager is always right;
Sharing is a need - between individuals, as teams, with managers;
Stay customer and content focused;
Long term is prepared by today's implementation;
Be lazy, i.e. try to make tomorrow's work easier for you and your team-workers;
They did not know it was impossible, so they did it.
The purpose of frameworks like mORMot is to provide your teams with a working and integrated set of classes, so that you can focus on your product, enjoying the collaboration with other Open Source users, and relying on an evolving and pertinent software architecture.
2.3. Model-View-Controller
The Model-View-Controller (MVC) is a software architecture, currently considered an architectural pattern used in software engineering. The pattern isolates "domain logic" (the application logic for the user) from the user interface (input and presentation), permitting independent development, testing and maintenance of each (separation of concerns).
Model View Controller process
The Model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In Event-Driven systems, the model notifies observers (usually views) when the information changes so that they can react - but since our ORM is stateless, it does not need to handle those events - see below.
The View renders the model into a form suitable for interaction, typically a user interface element. Multiple views can exist for a single model for different purposes. A viewport typically has a one to one correspondence with a display surface and knows how to render to it.
The Controller receives user input and initiates a response by making calls on model objects. A controller accepts input from the user and instructs the model and viewport to perform actions based on that input.
Model View Controller concept
In the framework, the model is not necessarily merely a database; the model in MVC is both the data and the business/domain logic needed to manipulate the data in the application. In our ORM, a model is implemented via a TSQLModel class, which centralizes all TSQLRecord inherited classes used by an application, both database-related and business-logic related.
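For instance, an ORM model could be defined as such (a minimal sketch, where TSQLBaby is a hypothetical record class used for illustration only):
type
  // a simple ORM class: published properties are mapped to database columns
  TSQLBaby = class(TSQLRecord)
  private
    fName: RawUTF8;
    fBirthDate: TDateTime;
  published
    property Name: RawUTF8 read fName write fName;
    property BirthDate: TDateTime read fBirthDate write fBirthDate;
  end;

function CreateModel: TSQLModel;
begin
  // the model centralizes all TSQLRecord classes used by the application
  result := TSQLModel.Create([TSQLBaby]);
end;
Such a model instance is then shared by both client and server sides, so that the very same data definition is used everywhere.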
The views can be implemented using:
For Desktop clients, a full set of User-Interface units of the framework, which is mostly auto-generated from code - they will consume the model as reference for rendering the data;
For Web clients, an integrated high-speed Mustache rendering engine - see below - is able to render HTML pages with logic-less templates, and controller methods written in Delphi - see below;
For AJAX clients, the server side is easily reached via RESTful JSON services.
The controller is mainly already implemented in our framework, within the RESTful commands, and will interact with both the associated view (e.g. for refreshing the User Interface) and model (for data handling). Some custom actions, related to the business logic, can be implemented via some custom TSQLRecord classes or via custom RESTful Services - see below.
2.4. Multi-tier architecture
In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client-server architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses middleware to service data requests between a user and a database employs a multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture.
In practice, a typical VCL/FMX RAD application written in Delphi has a two-tier architecture:
Two-Tier Architecture - Logical View
In this approach, the Application Tier mixes the UI and the logic in forms and modules.
Both ORM and SOA aspects of our RESTful framework make it easy to develop using a more versatile three-tier architecture.
Multi-Tier Architecture - Logical View
The Synopse mORMot Framework follows this development pattern:
Data Tier is either SQLite3 and/or an internal very fast in-memory database; most SQL queries are created on the fly, and database table layouts are defined from Delphi classes; you can also use any external database - currently SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix and NexusDB SQL dialects are handled - and even NoSQL engines like MongoDB can be directly used - see below;
Logic Tier is performed by the pure ORM aspect and the SOA implementation: you write Delphi classes which are mapped by the Data Tier into the database, and you can write your business logic as Services called via Delphi interfaces, up to a Domain-Driven Design - see below - if your project reaches some level of complexity;
Presentation Tier is either a Delphi client or an AJAX application, since the framework can communicate using RESTful JSON over HTTP/1.1 (the Delphi client User Interface is generated from code, using RTTI and data structures, not via RAD - and AJAX applications need to be written with your own tools and JavaScript framework, as there is no "official" AJAX framework included yet).
In fact, mORMot can scale up to a Domain-Driven Design four-tier architecture - see below - as such:
Presentation Tier which can be e.g. a Delphi or AJAX client;
Application Tier which serves JSON content according to the client application;
Business Logic Tier which centralizes all the Domain processing, shared among all applications;
Persistence/Data Tier which can be either in-process (like SQLite3 or in-memory) or external (e.g. Oracle, MS SQL, DB2, PostgreSQL, MySQL, Informix...).
Note that you have to make a difference between physical and logical n-tier architecture. Most of the time, n-Tier is intended to be a physical (hardware) view, for instance a separation between the database server and the application server, placing the database on a separate machine to facilitate ease of maintenance. In mORMot, and more generally in SOA - see below, we deal with logical layout, with separation of layers through interfaces - see below - and the underlying hardware implementation will usually not match the logical layout.
Domain Driven Design n-Tier Architecture - Physical View
In this document, we will focus on the logical way of thinking / coding, letting the physical deployment be made according to end-user expectations.
2.5. Service-Oriented Architecture (SOA)
Service-Oriented Architecture (SOA) is a flexible set of design principles used during the phases of systems development and integration in computing. A system based on a SOA will package functionality as a suite of inter-operable services that can be used within multiple, separate systems from several business domains.
A software service is a logical representation of a repeatable activity that produces a precise result. In short, a consumer asks a producer to act in order to produce a result. Most of the time, this invocation is independent from any previous invocation (it is therefore called stateless).
The SOA implementations rely on a mesh of software services. Services comprise unassociated, loosely coupled units of functionality that have no calls to each other embedded in them. Each service implements one action, such as filling out an online application for an account, or viewing an online bank statement, or placing an online booking or airline ticket order. Rather than services embedding calls to each other in their source code, they use defined protocols that describe how services pass and parse messages using description meta-data.
Service Oriented Architecture - Logical View
Since most of those services are by definition stateless, some kind of service composition is commonly defined to provide some kind of logical multi-tier orchestration of services. A higher level service invokes several services to work as a self-contained, stateless service; as a result, lower-level services can still be stateless, but the consumer of the higher level service is able to safely process some kind of transactional process.
SOA is mainly about decoupling. That is, it enables implementation independence in a variety of ways, for instance:
Dependency - Desired decoupling - Decoupling technique:
Platform - Hardware, Framework or Operating System should not constrain the choices of the Services consumers - achieved via standard protocols, mainly Web services (e.g. SOAP or RESTful/JSON);
Location - Consumers may not be impacted by service hosting changes - achieved via routing and proxies, which will maintain Services access;
Availability - Maintenance tasks shall be transparent - achieved via remote access, allowing centralized support on the Server side;
Versions - New services shall be introduced without requiring upgrades of clients - achieved via contract marshalling implemented on the Server side.
SOA and ORM - see below - do not exclude each other. In fact, even if some software architects tend to use only one of the two features, both can coexist and furthermore complete each other, in any Client-Server application:
ORM can be used to access the data with objects, that is with the native representation of the Server or Client side (Delphi, JavaScript...) - so ORM can provide efficient access to the data or the business logic - this is the idea of the CQRS pattern;
SOA will provide a more advanced way of handling the business logic: with custom parameters and data types, it is possible to provide some high-level Services to the clients, hiding most of the business logic, and reducing the needed bandwidth.
In particular, SOA will help keep the business logic on the Server side, therefore strengthening the Multi-tier architecture. By reducing the back-and-forth between the Client and the Server, it will also reduce the network bandwidth, and the Server resources (it will always cost less to run the service on the Server than on the Client, once you add all remote connection and serialization to the needed database access). Our interface-based SOA model allows the same code to run on both the client and the server side, with much better performance on the server side, and full interoperability of both sides.
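For instance, a service contract could be expressed as a plain Delphi interface (a minimal sketch; ICalculator and TServiceCalculator are hypothetical names, and the GUID is arbitrary):
type
  // the contract, shared by client and server
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    function Add(n1, n2: integer): integer;
  end;

  // the server-side implementation is a regular Delphi class
  TServiceCalculator = class(TInterfacedObject, ICalculator)
  public
    function Add(n1, n2: integer): integer;
  end;

function TServiceCalculator.Add(n1, n2: integer): integer;
begin
  result := n1 + n2;
end;
The server would then register this implementation class for the ICalculator contract (e.g. via TSQLRestServer.ServiceDefine), and any client could execute ICalculator.Add remotely over JSON - see below.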
2.6. Object-Relational Mapping (ORM)
In practice, ORM gives a set of methods to ease high-level objects persistence into a RDBMS.
Our Delphi class instances are not directly usable with a relational database, which has been for decades the most convenient way of persisting data. So some kind of "glue" is needed to let class properties be saved into one or several tables. You can interact with the database using its native language, a.k.a. SQL. But SQL is itself a full programming language, with diverse flavors depending on the exact backend engine (just think about how you define a column type able to store text). So writing and maintaining your SQL statements may become a time-consuming, difficult and error-prone task.
Sometimes, there will be nothing better than a tuned SQL statement, able to aggregate and join information from several tables. But most of the time, you will just need to perform some basic operations, known as CRUD (for Create Retrieve Update Delete actions) on well identified objects: this is where an ORM may give you a huge helping hand, since it is able to generate the SQL statements for you.
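For instance, with the mORMot ORM, basic CRUD operations could look as follows (a minimal sketch, assuming the hypothetical TSQLBaby class sketched above and an already connected Client: TSQLRest instance):
var Baby: TSQLBaby;
    ID: TID;
begin
  Baby := TSQLBaby.Create;
  try
    Baby.Name := 'Smith';
    Baby.BirthDate := Now;
    ID := Client.Add(Baby,true);        // Create: the INSERT statement is generated for you
  finally
    Baby.Free;
  end;
  Baby := TSQLBaby.Create(Client,ID);   // Retrieve: SELECT by ID
  try
    Baby.Name := 'Smeeth';
    Client.Update(Baby);                // Update: the UPDATE statement is generated for you
    Client.Delete(TSQLBaby,ID);         // Delete: by class and ID
  finally
    Baby.Free;
  end;
end;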
The ORM works in fact as such:
ORM Process
The ORM core retrieves the information needed to perform the mapping:
Object definition via its class type (via RTTI);
Database model as retrieved for each database engine.
ORM mapping
Since several implementation schemes are possible, we will first discuss the pros and the cons of each one.
First, here is a diagram presenting some common implementation schemes of database access with Delphi (most of which also apply to other languages or frameworks, including C# or Java).
Why a Client-Server ORM
The table below is an admittedly subjective (but not necessarily wrong) summary of some common schemes in the Delphi world. ORM is just one nice possibility among others.
Scheme: Use DB views and tables, with GUI components
Pros: SQL is a powerful language - Can use high-level DB tools (UML) and RAD approach
Cons: Business logic can't be elaborated without stored procedures - SQL code and stored procedures will bind you to a DB engine - Poor Client interaction - Reporting must call the DB directly - No Multi-tier architecture

Scheme: Map DB tables or views with Delphi classes
Pros: Can use elaborated business logic, in Delphi - Separation from UI and data
Cons: SQL code must be coded by hand and synchronized with the classes - Code tends to be duplicated - SQL code could bind you to a DB engine - Reports can be made from code or via DB related tools - Difficult to implement true Multi-tier architecture

Scheme: Use a Database ORM
Pros: Can use very elaborated business logic, in Delphi - SQL code is generated (in most cases) by the ORM - ORM will adapt the generated SQL to the DB engine
Cons: More abstraction needed at design time (no RAD approach) - In some cases, could lead to retrieve more data from DB than needed - Not yet a true Multi-tier architecture, because ORM is for DB access only and business logic will need to create separated classes

Scheme: Use a Client-Server ORM
Pros: Can use very elaborated business logic, in Delphi - SQL code is generated (in most cases) by the ORM - ORM will adapt the generated SQL to the DB engine - Services will allow to retrieve or process only needed data - Server can create objects viewed by the Client as if they were DB objects, even if they are only available in memory or the result of some business logic defined in Delphi - Complete Multi-tier architecture
Cons: More abstraction needed at design time (no RAD approach)
Of course, you'll find out that our framework implements a Client-Server ORM, which can be down-sized to stand-alone mode if needed, but which is, thanks to its unique implementation, scalable to any complex Domain-Driven Design.
As far as we have found out, looking at every language and technology around, almost no other ORM supports such a native Client-Server orientation. The usual practice is to use a Service-Oriented Architecture (SOA) for remote access to the ORM. Some projects allow remote access to an existing ORM, but they are separate projects. Our mORMot is pretty unique, with respect to its RESTful Client-Server orientation, from the ground up.
If you entered the Delphi world years ago, you may be pretty fluent with the RAD approach. But you probably also discovered how difficult it is to maintain an application which mixes UI components, business logic and database queries. Today's software users have huge ergonomic expectations about software usability: some screens with grids and buttons, mapping the database, definitely won't be appealing. Using mORMot's ORM / SOA approach will help you focus on your business and your clients' expectations, letting the framework perform most of the plumbing for you.
2.7. NoSQL and Object-Document Mapping (ODM)
SQL is the de-facto standard for data manipulation:
Schema-based;
Relational-based;
ACID by transactions;
Time proven and efficient;
"Almost" standard (each DB has its own column typing system).
NoSQL is a new paradigm, named as such in early 2009 (even if some database engines, like Lotus Domino, may have fit the definition for decades):
NoSQL stands for "Not Only SQL" - which is more positive than "no SQL";
Designed to scale for the web and BigData (e.g. Amazon, Google, Facebook), e.g. via easy replication and simple API;
Relying on no standard (for both data modeling and querying);
A lot of diverse implementations, covering any data use - http://nosql-database.org lists more than 150 engines.
We can identify two main families of NoSQL databases:
Graph-oriented databases;
Aggregate-oriented databases.
Graph-oriented databases store data by their relations / associations:
NoSQL Graph Database
Such kinds of databases are very useful e.g. for developing any "social" software, which values its data by the relations between nodes. Such a data model does not fit well with the relational model, whereas a NoSQL engine like Neo4j handles this kind of data natively. Note that by design, Graph-oriented databases are ACID.
But the main NoSQL database family is populated by the Aggregate-oriented databases. By Aggregate, we mean the same definition as will be used below for Domain Driven Design. It is a collection of data that we interact with as a unit, which forms the boundaries for ACID operations in a given model.
In fact, Aggregate-oriented databases can be specified as three main implementation/query patterns:
Document-based (e.g. MongoDB, CouchDB, RavenDB);
Key/Value (e.g. Redis, Riak, Voldemort);
Column family (e.g. Cassandra, HBase).
Some of them are schema-less (meaning that the data layout is not fixed, and can evolve on the fly without re-indexing the whole database) - but column-driven databases do have a schema - or even store plain BLOBs of data (this is the purpose of Key/Value engines, which focus on storage speed and rely on the client side to process the data).
In short, an RDBMS stores data per table, and needs to JOIN the references to get the aggregated information:
SQL Aggregate via JOINed tables
NoSQL, on the other hand, stores its aggregates as documents: the whole data is embedded in one.
NoSQL Aggregate as one document
Such an aggregate may be represented as the following JSON - see below - data:
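For instance, an "order" aggregate could be stored as a single document like the following (an illustrative sample, with hypothetical field names):
{
  "_id": 1234,
  "customer": { "name": "John Smith", "city": "London" },
  "lines": [
    { "product": "reference 1", "quantity": 3, "price": 18.2 },
    { "product": "reference 2", "quantity": 1, "price": 7.5 }
  ]
}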
Such a document will fit directly with the object programming model, without the need to think about JOINed queries and database plumbing.
As a result, we can discuss the two data models:
Relational data Model with highly-structured table organization, and rigidly-defined data formats and record structure;
Document data Model as a collection of complex documents with arbitrary, nested data formats and varying "record" format.
The Relational model features normalization of data, i.e. organizing the fields and tables of a relational database to minimize redundancy. On the other hand, the Document model features denormalization of data, to optimize the read performance of a database by adding redundant data or by grouping data. It also features horizontal scaling of the servers, since data can easily be balanced among several servers, without the speed penalty of performing a remote JOIN.
One of the main difficulties, when working with NoSQL, is to define how to denormalize the data, and when to store the data in normalized format. One good habit is to model your data depending on the most current queries you will have to perform. For instance, you may embed sub-documents which will be very likely to be requested by your application most of the time. Note that most NoSQL engines feature a projection mechanism, which allows you to return only the needed fields for a query, leaving the sub-documents on the server if you do not need them at this time. The less frequent queries may be executed over separated collections, populated e.g. with consolidated information. Since NoSQL databases have fewer hard-and-fast rules than their relational databases ancestors, you are more likely to tune your model, depending on your expectations. In practice, you may spend less time thinking about "how" to store the data than with a RDBMS, and are still able to normalize information later, if needed. NoSQL engines do not fear redundant information, as soon as you follow the rules of letting the client application take care of the whole data consistency (e.g. via one ORM).
As you may have noticed, this Document data Model is much closer to the OOP paradigm than the classic relational scheme. A new family of frameworks even appeared together with NoSQL adoption, named Object Document Mapping (ODM), which is to NoSQL engines what Object-Relational Mapping (ORM) is to RDBMS.
In short, both approaches have benefits, which are to be weighted.
SQL strengths vs NoSQL strengths:
SQL: Ubiquitous SQL / NoSQL: Map OOP and complex types (e.g. arrays or nested documents);
SQL: Easy vertical scaling / NoSQL: Uncoupled data: horizontal scaling;
SQL: Data size (avoid duplicates and with no schema) / NoSQL: Schema-less: cleaner evolution;
SQL: Data is stored once, therefore consistent / NoSQL: Version management (e.g. CouchDB);
SQL: Complex ACID statements / NoSQL: Graph storage (e.g. Redis);
SQL: Aggregation functions (depends) / NoSQL: Map/Reduce or Aggregation functions (e.g. since MongoDB 2.2).
With mORMot, you can switch from a classic SQL engine to a trendy MongoDB server in just one line of code, when initializing the data on the server side. You can switch from ORM to ODM at any time, even at runtime, e.g. for a demanding customer.
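As a sketch of such a switch (assuming the StaticMongoDBRegister() helper from the mORMotMongoDB.pas unit, a MongoDB server running locally, the TSQLBaby class sketched earlier, and an existing aRestServer: TSQLRestServer instance - names and exact signatures are illustrative):
var MongoClient: TMongoClient;
    MongoDatabase: TMongoDatabase;
begin
  MongoClient := TMongoClient.Create('localhost',27017);
  MongoDatabase := MongoClient.Database['test'];
  // from now on, TSQLBaby data will be persisted in a MongoDB collection,
  // instead of the default SQLite3 / external SQL storage
  StaticMongoDBRegister(TSQLBaby,aRestServer,MongoDatabase);
end;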
2.8. Domain-Driven Design
2.8.1. Definition
Over the last decade or two, a philosophy has developed as an undercurrent in the object community. The premise of domain-driven design is two-fold:
For most software projects, the primary focus should be on the domain and domain logic;
Complex domain designs should be based on a model.
Domain-driven design is not a technology or a methodology. It is a way of thinking and a set of priorities, aimed at accelerating software projects that have to deal with complicated domains.
Of course, this particular architecture is customizable according to the needs of each project. We simply propose following an architecture that serves as a baseline to be modified or adapted by architects according to their needs and requirements.
2.8.2. Patterns
In respect to other kinds of Multi-tier architecture, DDD introduces some restrictive patterns, for a cleaner design:
Focus on the Domain - i.e. a particular kind of knowledge;
Define Bounded contexts within this domain;
Create an evolving Model of the domain, ready-to-be consumed by applications;
Identify some kind of objects - called Value objects or Entity Objects / Aggregates;
Use an Ubiquitous Language in resulting model and code;
Isolate the domain from other kinds of concerns (e.g. persistence should not be called from the domain layer - i.e. the domain should not be polluted by technical considerations, but rely on the Factory and Repository patterns);
Publish the domain as well-defined uncoupled Services;
Integrate the domain services with existing applications or legacy code.
The following diagram is a map of the patterns presented and the relationships between them. It is inspired by the one included in Eric Evans's reference book, "Domain-Driven Design", Addison-Wesley, 2004 (and updated to take into account some points that have appeared since).
Domain-Driven Design - Building Blocks
You may recognize a lot of existing patterns you have already met or implemented. What makes DDD unique is that those patterns have been organized around some clear concepts, thanks to decades of business software experience.
2.8.3. Is DDD good for you?
Domain-Driven design is not to be used everywhere, and in every situation.
First of all, the following are prerequisites for using DDD:
Identified and well-bounded domain (e.g. your business target should be clearly identified);
You must have access to domain experts to establish a creative collaboration, in an iterative (may be agile) way;
Skilled team, able to write clean code - note also that since DDD is more about code expressiveness than technology, it may not appear so "trendy" to youngest developers;
You want your internal team to accumulate knowledge of the domain - therefore, outsourcing may be constrained to applications, not the core domain.
Then check that DDD is worth it, i.e. if:
It helps you solving the problem area you are trying to address;
It meets your strategic goals: DDD is to be used where you will get your business money, and make you distinctive from your competitors;
You need to bring clarity, and need to solve inner complexity, e.g. modeling a lot of rules (you won't use DDD to build simple applications - in this case, RAD may be enough);
Your business is exploring: your goal is identified, but you do not know how to accomplish it;
You don't need to have all of these concerns - but at least one or two should apply.
2.8.4. Introducing DDD
Perhaps DDD sounds more appealing to you now. In this case, our mORMot framework will provide all the bricks you need to implement it, focusing on your domain and letting the libraries do all the needed plumbing. If you have identified that DDD is not to be used now, you will still find in mORMot the tools you need, ready to switch to DDD when it becomes necessary.
Legacy code and existing projects will benefit from DDD patterns. Finding so-called seams, along with isolating your core domain, can be extremely valuable when using DDD techniques to refactor and tighten the highest value parts of your code. It is not mandatory to re-write your whole existing software with DDD patterns everywhere: once you have identified where your business strategy's core is, you can introduce DDD progressively in this area. Then, following continuous feedback, you will refine your code, adding regression tests, and isolating your domain code from end-user code.
For a technical introduction about DDD and how mORMot can help you implement this design, see below.
With mORMot, your software solution will never be stuck in a dead-end. You'll always be able to adapt to your customers' needs, and maximize your ROI.
3. Enter new territory
3.1. Meet the mORMot
The Synopse mORMot framework consists of a huge number of units, so we will start by introducing them.
mORMot Source Code Main Units
3.2. Main units
The main units you have to be familiar with are the following:
Other units are available in the framework source code repository, but are either expected by those files above (e.g. like SynDB*.pas database providers), or used only optionally in end-user cross-platform client applications (e.g. the CrossPlatform folder).
In the following pages, the features offered by those units will be presented. Do not forget to take a look at all sample projects available in the SQLite3\Samples sub-folders - nothing is better than some simple code to look at.
Then detailed information will be available in the second part of this document - see below.
4. SynCommons unit
Adopt a mORMot
First of all, let us introduce some cross-cutting features, used everywhere in the Synopse source code. Even if you do not need to go deeply into the implementation details, knowing them will help you not be confused by some classes and types you may encounter in the framework source, and in its documentation.
It was a design choice to use some custom low-level types, classes and functions instead of calling the official Delphi RTL. Benefits could be:
Cross-platform and cross-compiler support (e.g. leverage specificities, about memory model or RTTI);
Unicode support for all versions of Delphi, even before Delphi 2009, or with FPC;
Optimized for process speed, multi-thread friendliness and re-usability;
Sharing of most common features (e.g. for text/data processing);
KISS and consistent design.
In order to use the Synopse mORMot framework, you had better be familiar with some of those definitions.
First of all, a Synopse.inc include file is provided, and appears in most of the framework units:
TDocVariant custom variant type for dynamic schema-less object or array storage.
Other shared features available in SynTests.pas and SynLog.pas will be detailed later, i.e. Testing and Logging - see below.
4.1. Unicode and UTF-8
Our mORMot Framework has 100% Unicode compatibility, that is, it compiles under Delphi 2009 and up (including the latest available Delphi version). The code has been deeply rewritten and tested, in order to provide compatibility with the String=UnicodeString paradigm of these compilers. But the code will also handle Unicode safely with older versions, i.e. from Delphi 6 up to Delphi 2007.
From its core to its uppermost features, our framework is natively UTF-8, which is the de-facto character encoding for JSON, SQLite3, and most supported database engines. This allows our code to offer fast streaming/parsing in a SAX-like mode, avoiding any conversion between encodings from the storage layer to your business logic. We also needed to establish a secure way to use strings, in order to handle all versions of Delphi (even pre-Unicode versions, especially the Delphi 7 version we like so much), and provide compatibility with the FreePascal Compiler. This consistency allows to circumvent any RTL bug or limitation, and ease long-term support of your project.
Some string types have been defined, and used in the code for best cross-compiler efficiency:
RawUTF8 is used for every internal data usage, since both SQLite3 and JSON do expect UTF-8 encoding;
WinAnsiString where WinAnsi-encoded AnsiString (code page 1252) are needed;
Generic string for i18n (e.g. in unit mORMoti18n), i.e. text ready to be used within the VCL, as either AnsiString (for Delphi 2 to 2007) or UnicodeString (for Delphi 2009 and later);
RawUnicode in some technical places (e.g. direct Win32 *W() API call in Delphi 7) - note: this type is NOT compatible with Delphi 2009 and later UnicodeString;
SynUnicode is the fastest available Unicode native string type, depending on the compiler used (i.e. WideString before Delphi 2009, and UnicodeString since);
Some special conversion functions to be used for Delphi 2009+ UnicodeString (defined inside {$ifdef UNICODE}...{$endif} blocks);
Never use AnsiString directly, but one of the types above.
Note that RawUTF8 is the preferred string type to be used in our framework when defining textual properties in a TSQLRecord and for all internal data processing. It is only when you reach the User Interface layer that you may explicitly convert the RawUTF8 content into the generic VCL string type, using either the Language.UTF8ToString method (from the mORMoti18n.pas unit) or the following function from SynCommons.pas:
/// convert any UTF-8 encoded String into a generic VCL Text
// - it's preferred to use TLanguageFile.UTF8ToString() in mORMoti18n.pas,
// which will handle full i18n of your application
// - it will work as is with Delphi 2009+ (direct unicode conversion)
// - under older versions of Delphi (no unicode), it will use the
// current RTL codepage, as with WideString conversion (but without slow
// WideString usage)
function UTF8ToString(const Text: RawUTF8): string;
Of course, the StringToUTF8 method or function are available to send back some text to the ORM layer. A lot of dedicated conversion functions (including to/from numerical values) are included in SynCommons.pas. Those were optimized for speed and multi-thread capabilities, and to avoid implicit conversions involving a temporary string variable.
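In practice, conversion is a matter of one function call in each direction - for instance (a short sketch):
var u: RawUTF8;
    s: string;
begin
  s := 'Some text from the UI layer';
  u := StringToUTF8(s);   // generic VCL string -> framework UTF-8 content
  s := UTF8ToString(u);   // back to the generic VCL string type
end;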
Warnings during the compilation process are not allowed, especially under Unicode versions of Delphi (e.g. Delphi 2010): all string conversions between the types above are made explicitly in the framework's code, to avoid any unattended data loss.
If you are using an older version of Delphi, and have an existing code base involving a lot of WideString variables, you may take a look at the SynFastWideString.pas unit. Adding this unit at the top of your .dpr uses clause will let all WideString processing use the Delphi heap and its very efficient FastMM4 memory manager, instead of the much slower BSTR Windows API. The performance gain can be more than 50 times, if your existing code uses a lot of WideString variables. Note that using this unit will break compatibility with BSTR/COM/OLE kinds of string, so it is not to be used with COM objects. In all cases, if you need Unicode support with older versions of Delphi, consider using our RawUTF8 type instead, which is much better integrated with our framework, and has less overhead.
4.2. Currency handling
The faster and safer way of comparing two currency values is certainly to map the variables to their internal Int64 binary representation, as such:
function CompCurrency(var A,B: currency): Int64;
var A64: Int64 absolute A;
B64: Int64 absolute B;
begin
result := A64-B64;
end;
This will avoid any rounding error during comparison (working with *10000 integer values), and is likely to be faster than the default implementation, which uses the FPU (or SSE2 under x64 architecture) instructions.
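For instance, the CompCurrency() function above may be used as such (a short sketch):
var a, b: currency;
begin
  a := 1.0002;
  b := 1.0001;
  if CompCurrency(a,b)>0 then
    writeln('a is greater than b'); // exact comparison of the *10000 Int64 values, with no rounding error
end;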
Some direct currency processing is available in the SynCommons.pas unit. It will by-pass the FPU use, and is therefore very fast. There are some functions using the Int64 binary representation (accessible either as PInt64(@aCurrencyVar)^ or the absolute syntax):
function StrCurr64(P: PAnsiChar; const Value: Int64): PAnsiChar;
Using those functions can be much faster for textual conversion than using the standard FloatToText() implementation. They are validated with provided regression tests.
Of course, in normal code, it is certainly not worth using the Int64 binary representation of currency - just rely on the default compiler/RTL implementation. In all cases, having optimized functions was needed for both the speed and accuracy of our ORM data processing, and also for what is described below.
Note that we discovered an issue in the FPC compiler when currency is used while compiling for x64-win64: currency value comparisons may be wrongly implemented using x87 registers. We found out that using an i386-win32 FPC compiler is a safer approach, even when targeting x64-win64 - at least for the trunk as of 2019/11.
4.3. TDynArray dynamic array wrapper
Version 1.13 of the SynCommons.pas unit introduced two kinds of wrapper, TDynArray and TDynArrayHashed:
With TDynArray, you can access any dynamic array (like TIntegerDynArray = array of integer) using TList-like properties and methods, e.g. Count, Add, Insert, Delete, Clear, IndexOf, Find, Sort and some new methods like LoadFromStream, SaveToStream, LoadFrom, SaveTo, Slice, Reverse, and AddArray. It includes e.g. fast binary serialization of any dynamic array, even containing strings or records - a CreateOrderedIndex method is also available to create individual index according to the dynamic array content. You can also serialize the array content into JSON, if you wish.
One benefit of dynamic arrays is that they are reference-counted, so they do not need any Create/try..finally...Free code, and are well handled by the Delphi compiler. For performance-critical tasks, dynamic array access is very optimized, since its whole content will be allocated at once, therefore reducing the memory fragmentation and being much more CPU cache friendly.
Dynamic arrays are no replacement for a TCollection nor a TList (which are the standard and efficient way of storing class instances, and are also handled as published properties since revision 1.13 of the framework), but they are a very handy way of having a list of content or a dictionary at hand, with no prior class nor property definition.
You can look at them like Python's list, tuples (via records handling) and dictionaries (via Find method, especially with the dedicated TDynArrayHashed wrapper), in pure Delphi. Our new methods (about searching and serialization) allow most usage of those script-level structures in your Delphi code.
In order to handle dynamic arrays in our ORM, some RTTI-based structures were designed for this task. Since dynamic arrays of records would be necessary, some low-level fast access to the record content, using the common RTTI, has also been implemented (much faster than the "new" enhanced RTTI available since Delphi 2010).
4.3.1. TList-like properties
Here is how you can have method-driven access to the dynamic array:
type
  TGroup = array of integer;
var
  Group: TGroup;
  GroupA: TDynArray;
  i, v: integer;
begin
  GroupA.Init(TypeInfo(TGroup),Group); // associate GroupA with Group
  for i := 0 to 1000 do
  begin
    v := i+1000; // need argument passed as a const variable
    GroupA.Add(v);
  end;
  v := 1500;
  if GroupA.IndexOf(v)<0 then // search by content
    ShowMessage('Error: 1500 not found!');
  for i := GroupA.Count-1 downto 0 do
    if i and 3=0 then
      GroupA.Delete(i); // delete integer at index i
end;
This TDynArray wrapper will work also with array of string or array of record...
Records only need to be packed and to contain only non-reference-counted fields (byte, integer, double...) or string or variant reference-counted fields (there is no support for nested interfaces yet). TDynArray is able to handle records within records, and even dynamic arrays within records.
Yes, you read well: it will handle a dynamic array of record, in which you can put some string or whatever data you need.
The IndexOf() method will search by content. That is e.g. for an array of record, all record fields content (including string properties) must match.
Note that TDynArray is just a wrapper around an existing dynamic array variable. In the code above, the Add and Delete methods are modifying the content of the Group variable. You can therefore initialize a TDynArray wrapper when needed, to access any native Delphi dynamic array more efficiently. TDynArray doesn't contain any data: the elements are stored in the dynamic array variable, not in the TDynArray instance.
4.3.2. Enhanced features
Some methods were defined in the TDynArray wrapper, which are not available in a plain TList - with those methods, we come closer to some native generics implementation:
Now you can save and load a dynamic array content to or from a stream or a string (using LoadFromStream/SaveToStream or LoadFrom/SaveTo methods) - it will use a proprietary but very fast binary stream layout;
And you can sort the dynamic array content by two means: either in-place (i.e. the array elements content is exchanged - use the Sort method in this case) or via an external integer index look-up array (using the CreateOrderedIndex method - in this case, you can have several orders to the same data);
You can specify any custom comparison function, and there is a new Find method which can use fast binary search if available.
Here is how those new methods work:
var
Test: RawByteString;
...
Test := GroupA.SaveTo;
GroupA.Clear;
GroupA.LoadFrom(Test);
GroupA.Compare := SortDynArrayInteger;
GroupA.Sort;
for i := 1 to GroupA.Count-1 do
  if Group[i]<Group[i-1] then
    ShowMessage('Error: unsorted!');
v := 1500;
if GroupA.Find(v)<0 then // fast binary search
  ShowMessage('Error: 1500 not found!');
Some unique methods like Slice, Reverse or AddArray are also available, and mimic well-known Python methods.
Still closer to the generic paradigm, working for Delphi 6 up to the latest available Delphi version, without the need for the slow enhanced RTTI, nor the executable size overhead and compilation issues of generics...
4.3.3. Capacity handling via an external Count
One common speed issue with the default usage of TDynArray is that the internal memory buffer is reallocated when you change its length, just like a regular Delphi dynamic array.
That is, whenever you call Add or Delete methods, an internal call to SetLength(DynArrayVariable) is performed. This could be slow, because it always executes some extra code, including a call to ReallocMem.
In order not to suffer for this, you can define an external Count value, as an Integer variable.
In this case, Length(DynArrayVariable) will be the memory capacity of the dynamic array, and the exact number of stored items will be available from this Count variable. A Count property is exposed by TDynArray, and will always reflect the number of items stored in the dynamic array. It will point either to the external Count variable, if defined, or it will reflect Length(DynArrayVariable), just as usual. A Capacity property is also exposed by TDynArray, and will reflect the capacity of the dynamic array: in case of an external Count variable, it will reflect Length(DynArrayVariable).
As a result, adding or deleting items could be much faster.
var
  Group: TIntegerDynArray;
  GroupA: TDynArray;
  GroupCount, i, v: integer;
begin
  GroupA.Init(TypeInfo(TIntegerDynArray),Group,@GroupCount);
  GroupA.Capacity := 1023; // reserve memory
  for i := 0 to 1000 do
  begin
    v := i+1000; // need argument passed as a const variable
    GroupA.Add(v); // faster than with no external GroupCount variable
  end;
  Check(GroupA.Count=1001);
  Check(GroupA.Capacity=1023);
  Check(GroupA.Capacity=length(Group));
4.3.4. JSON serialization
The TDynArray wrapper features some native JSON serialization abilities: the TTextWriter.AddDynArrayJSON and TDynArray.LoadFromJSON methods are available for UTF-8 JSON serialization of dynamic arrays.
See below for all details about this unique feature.
4.3.5. Daily use
The TTestLowLevelCommon._TDynArray and _TDynArrayHashed methods implement the automated unitary tests associated with these wrappers.
You'll find there samples of dynamic array handling and more advanced features, with various kinds of data (from a plain TIntegerDynArray to records within records).
The TDynArrayHashed wrapper allows the implementation of a dictionary using a dynamic array of records. For instance, the prepared statement cache is handled by the following code in SynSQLite3.pas:
The TDynArrayHashed.Init method will recognize that the first TSQLStatementCache field is a RawUTF8, so it will set by default an AnsiString hashing of this first field (we could specify a custom hash function or content hashing by overriding the default nil parameters with some custom functions).
So we can directly specify a GenericSQL variable as the first parameter of FindHashedForAdding, since this method will only access the first field's RawUTF8 content, and won't handle the whole record content. In fact, the FindHashedForAdding method will perform all the hashing, searching, and new item adding if necessary - in just one step. Note that this method only prepares for adding, and the code needs to explicitly set the StatementSQL content when an item is actually created:
function TSQLStatementCached.Prepare(const GenericSQL: RawUTF8): PSQLRequest;
var added: boolean;
begin
  with Cache[Caches.FindHashedForAdding(GenericSQL,added)] do
  begin
    if added then
    begin
      StatementSQL := GenericSQL; // need to explicitly set the content
      Statement.Prepare(DB,GenericSQL);
    end else
    begin
      Statement.Reset;
      Statement.BindReset;
    end;
    result := @Statement;
  end;
end;
The last method of TSQLStatementCached will just loop over each statement and close it: note that this code uses the dynamic array just as usual:
procedure TSQLStatementCached.ReleaseAllDBStatements;
var i: integer;
begin
  for i := 0 to Count-1 do
    Cache[i].Statement.Close; // close prepared statement
  Caches.Clear; // same as SetLength(Cache,0) + Count := 0
end;
The resulting code is definitively quick to execute, and easy to read/maintain.
4.3.6. TDynArrayHashed
If your purpose is to access a dynamic array using one of its fields as a key, consider using TDynArrayHashed. This wrapper, inheriting from TDynArray, will store a hashed index of one field of the dynamic array record, for very efficient lookup. For a few dozen entries, it won't change the performance, but once you reach thousands of items, an index will be much faster - almost O(1) instead of O(n).
One step further is available with the TSynDictionary class. It is a thread-safe dictionary to store some values from associated keys, as two separated dynamic arrays.
Each TSynDictionary instance will hold and store the associated dynamic arrays - this is not the case with TDynArray and TDynArrayHashed, which are only wrappers around an existing dynamic array variable.
One big advantage is that access to the TSynDictionary methods is thread-safe by design: internally, a TSynLock will protect the keys, maintained by a TDynArrayHashed instance, and the values, maintained by a TDynArray. Access to/from local variables will be made via an explicit copy, for perfect thread safety.
For advanced use, the TSynDictionary offers JSON serialization and binary storage (with optional compression), and the ability to specify a timeout period in seconds, after which any call to TSynDictionary.DeleteDeprecated will delete older entries - which is very convenient to cache values, with optional persistence on disk. Just like your own in-process Redis/MemCached instance.
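As a short sketch of its usage (a minimal example, assuming RawUTF8 keys and integer values stored as TRawUTF8DynArray / TIntegerDynArray):
var dict: TSynDictionary;
    key: RawUTF8;
    value: integer;
begin
  // keys and values are each stored as a dynamic array
  dict := TSynDictionary.Create(TypeInfo(TRawUTF8DynArray),TypeInfo(TIntegerDynArray));
  try
    key := 'answer';
    value := 42;
    dict.Add(key,value);
    value := 0;
    if dict.FindAndCopy(key,value) then
      writeln(value); // writes 42, retrieved as a thread-safe copy
  finally
    dict.Free;
  end;
end;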
4.4. TDocVariant custom variant type
With revision 1.18 of the framework, we introduced two new custom types of variants:
TDocVariant, a custom variant type for storing any schema-less document-based content, i.e. objects (name/value pairs) or arrays of values;
TBSONVariant, a custom variant type for storing MongoDB-specific extensions (e.g. ObjectID or other BSON-specific types).
The second custom type (which handles MongoDB-specific extensions - like ObjectID or other specific types like dates or binary) will be presented later, when dealing with MongoDB support in mORMot, together with the BSON kind of content. BSON / MongoDB support is implemented in the SynMongoDB.pas unit.
We will now focus on TDocVariant itself, which is a generic container of JSON-like objects or arrays. This custom variant type is implemented in SynCommons.pas unit, so is ready to be used everywhere in your code, even without any link to the mORMot ORM kernel, or MongoDB.
4.4.1. TDocVariant documents
TDocVariant implements a custom variant type which can be used to store any JSON/BSON document-based content, i.e. either:
Name/value pairs, for object-oriented documents (internally identified as dvObject sub-type);
An array of values (including nested documents), for array-oriented documents (internally identified as dvArray sub-type);
Any combination of the two, by nesting TDocVariant instances.
Here are the main features of this custom variant type:
DOM approach of any object or array documents;
Perfect storage for dynamic value-objects content, with a schema-less approach (as you may be used to in scripting languages like Python or JavaScript);
Allow nested documents, with no depth limitation but the available memory;
Assignment can be either per-value (default, safest but slower when containing a lot of nested data), or per-reference (immediate reference-counted assignment);
Very fast JSON serialization / un-serialization with support of MongoDB-like extended syntax;
Access to properties in code, via late-binding (including almost no speed penalty due to our VCL hack as detailed in SDD # DI-2.2.3);
Direct access to the internal variant names and values arrays from code, by trans-typing into a TDocVariantData record;
Instance life-time is managed by the compiler (like any other variant type), without the need to use interfaces or explicit try..finally blocks;
Optimized to use as little memory and CPU resource as possible (in contrast to most other libraries, it does not allocate one class instance per node, but rely on pre-allocated arrays);
Opened to extension of any content storage - for instance, it will perfectly integrate with BSON serialization and custom MongoDB types (ObjectID, Decimal128, RegEx...), to be used in conjunction with MongoDB servers;
Designed to work with our mORMot ORM: any TSQLRecord instance containing such variant custom types as published properties will be recognized by the ORM core, and work as expected with any database back-end (storing the content as JSON in a TEXT column);
Designed to work with our mORMot SOA: any interface-based service - see below - is able to consume or publish such kind of content, as variant kind of parameters;
Fully integrated with the Delphi IDE: any variant instance will be displayed as JSON in the IDE debugger, making it very convenient to work with.
To create instances of such variant, you can use some easy-to-remember functions:
_Obj() _ObjFast() global functions to create a variant object document;
_Arr() _ArrFast() global functions to create a variant array document;
_Json() _JsonFast() _JsonFmt() _JsonFastFmt() global functions to create any variant object or array document from JSON, supplied either with standard or MongoDB-extended syntax.
You have two non excluding ways of using the TDocVariant storage:
As regular variant variables, then using either late-binding or faster _Safe() to access its data;
Directly as TDocVariantData variables, later returning a variant instance using variant(aDocVariantData).
Note that you do not need to protect any stack-allocated TDocVariantData instance with a try..finally, since the compiler will do it for you. This record type has a lot of powerful methods, e.g. to apply map/reduce on the content, or perform advanced searches or marshalling.
4.4.1.1. Variant object documents
The more straightforward is to use late-binding to set the properties of a new TDocVariant instance:
var V: variant;
...
TDocVariant.New(V); // or slightly slower V := TDocVariant.New;
V.name := 'John';
V.year := 1972;
// now V contains {"name":"John","year":1972}
With _Obj(), an object variant instance will be initialized with data supplied two by two, as Name,Value pairs, e.g.
var V1,V2: variant; // stored as any variant
...
V1 := _Obj(['name','John','year',1972]);
V2 := _Obj(['name','John','doc',_Obj(['one',1,'two',2.5])]); // with nested objects
Then you can convert those objects into JSON, by two means:
Using the VariantSaveJson() function, which directly returns the UTF-8 content;
Or by trans-typing the variant instance into a string (this will be slower, but is possible).
writeln(VariantSaveJson(V1)); // explicit conversion into RawUTF8
writeln(V1); // implicit conversion from variant into string
// both commands will write '{"name":"John","year":1972}'
writeln(VariantSaveJson(V2)); // explicit conversion into RawUTF8
writeln(V2); // implicit conversion from variant into string
// both commands will write '{"name":"John","doc":{"one":1,"two":2.5}}'
As a consequence, the Delphi IDE debugger is able to display such variant values as their JSON representation. That is, V1 will be displayed as '{"name":"John","year":1972}' in the IDE debugger Watch List window, or in the Evaluate/Modify (F7) expression tool. This is pretty convenient, and much more user-friendly than any class-based solution (which requires the installation of a specific design-time package in the IDE).
You can access the object properties via late-binding, with any depth of nested objects, in your code:
writeln('name=',V1.name,' year=',V1.year);
// will write 'name=John year=1972'
writeln('name=',V2.name,' doc.one=',V2.doc.one,' doc.two=',V2.doc.two);
// will write 'name=John doc.one=1 doc.two=2.5'
V1.name := 'Mark'; // overwrite a property value
writeln(V1.name); // will write 'Mark'
V1.age := 12; // add a property to the object
writeln(V1.age); // will write '12'
Note that the property names will be evaluated at runtime only, not at compile time. For instance, if you write V1.nome instead of V1.name, there will be no error at compilation, but an EDocVariant exception will be raised at execution (unless you set the dvoReturnNullForUnknownProperty option to _Obj/_Arr/_Json/_JsonFmt which will return a null variant for such undefined properties).
In addition to the property names, some pseudo-methods are available for such objectvariant instances:
writeln(V1._Count); // will write 3 i.e. the number of name/value pairs in the object document
writeln(V1._Kind); // will write 1 i.e. ord(dvObject)
for i := 0 to V2._Count-1 do
  writeln(V2.Name(i),'=',V2.Value(i));
// will write to the console:
//  name=John
//  doc={"one":1,"two":2.5}
//  age=12
if V1.Exists('year') then
writeln(V1.year);
V1.Add('key','value'); // add one property to the object
The variant values returned by late-binding are generated as varByRef, so it has two benefits:
Much better performance, even if the nested objects are created per-value (see below);
Allow nested calls of pseudo methods, as such:
var V: variant;
...
V := _Json('{arr:[1,2]}');
V.arr.Add(3); // will work, since V.arr is returned by reference (varByRef)
writeln(V); // will write '{"arr":[1,2,3]}'
V.arr.Delete(1);
writeln(V); // will write '{"arr":[1,3]}'
You may also trans-type your variant instance into a TDocVariantData record, and directly access its internals. For instance:
TDocVariantData(V1).AddValue('comment','Nice guy');
with TDocVariantData(V1) do        // direct transtyping
  if Kind=dvObject then            // direct access to the TDocVariantKind field
    for i := 0 to Count-1 do       // direct access to the Count: integer field
      writeln(Names[i],'=',Values[i]); // direct access to the internal storage arrays
By definition, trans-typing via a TDocVariantData record is slightly faster than using late-binding.
But you must ensure that the variant instance is really a TDocVariant kind of data before transtyping, e.g. by calling the _Safe(aVariant)^ function (or DocVariantType.IsOfType(aVariant) or DocVariantData(aVariant)^), which will work even for members returned as varByRef via late-binding (e.g. V2.doc):
with _Safe(V1)^ do                 // note ^ to de-reference into TDocVariantData
  for ndx := 0 to Count-1 do       // direct access to the Count: integer field
    writeln(Names[ndx],'=',Values[ndx]); // direct access to the internal storage arrays
writeln(V2.doc); // will write '{"one":1,"two":2.5}'
if DocVariantType.IsOfType(V2.Doc) then // will be false, since V2.Doc is a varByRef variant
  writeln('never run'); // .. so TDocVariantData(V2.doc) will fail
with DocVariantData(V2.Doc)^ do    // note ^ to de-reference into TDocVariantData
  for ndx := 0 to Count-1 do       // direct access to the TDocVariantData methods
    writeln(Names[ndx],'=',Values[ndx]);
// will write to the console:
//  one=1
//  two=2.5
In practice, _Safe(aVariant)^ may be preferred, since DocVariantData(aVariant)^ will raise an EDocVariant exception if aVariant is not a TDocVariant, whereas _Safe(aVariant)^ will return a "fake" void DocVariant instance, in which Count=0 and Kind=dvUndefined.
The TDocVariantData type features some additional U[] I[] B[] D[] O[] O_[] A[] A_[] _[] properties, which could be used to have direct typed access to the data, as RawUTF8, Int64/integer, Double, or checking if the nested document is an O[]bject or an A[]rray.
You can also allocate directly the TDocVariantData instance on stack, if you do not need any variant-oriented access to the object, but just some local storage:
var Doc1,Doc2: TDocVariantData;
...
Doc1.Init; // needed for proper initialization
assert(Doc1.Kind=dvUndefined);
Doc1.AddValue('name','John'); // add some properties
Doc1.AddValue('birthyear',1972);
assert(Doc1.Kind=dvObject); // is now identified as an object
assert(Doc1.Value['name']='John'); // read access to the properties (also as varByRef)
assert(Doc1.Value['birthyear']=1972);
assert(Doc1.U['name']='John'); // slightly faster read access
assert(Doc1.I['birthyear']=1972);
writeln(Doc1.ToJSON); // will write '{"name":"John","birthyear":1972}'
Doc1.Value['name'] := 'Jonas'; // update one property
writeln(Doc1.ToJSON); // will write '{"name":"Jonas","birthyear":1972}'
Doc2.InitObject(['name','John','birthyear',1972],
aOptions+[dvoReturnNullForUnknownProperty]); // initialization from name/value pairs
assert(Doc2.Kind=dvObject);
assert(Doc2.Count=2);
assert(Doc2.Names[0]='name');
assert(Doc2.Values[0]='John');
writeln(Doc2.ToJSON); // will write '{"name":"John","birthyear":1972}'
Doc2.Delete('name');
writeln(Doc2.ToJSON); // will write '{"birthyear":1972}'
assert(Doc2.U['name']='');
assert(Doc2.I['birthyear']=1972);
Doc2.U['name'] := 'Paul';
Doc2.I['birthyear'] := 1982;
writeln(Doc2.ToJSON); // will write '{"name":"Paul","birthyear":1982}'
You do not need to protect the stack-allocated TDocVariantData instances with a try..finally, since the compiler will do it for you. Take a look at all the methods and properties of TDocVariantData.
4.4.1.2. FPC restrictions
You should take note that with the FreePascal compiler, calling late-binding functions with arguments (like Add or Delete) would most probably fail to work as expected. We have found out that the following code may trigger some random access violations:
doc.Add('text');
doc.Add(anotherdocvariant);
So you should rather access directly the underlying TDocVariantData instance:
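For instance, a minimal sketch using the _Safe() function described above to reach the underlying TDocVariantData and its AddItem() method:
_Safe(doc)^.AddItem('text');
_Safe(doc)^.AddItem(anotherdocvariant);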
In fact, late-binding function arguments seem to work only for simple values (like integer or double), but not for complex types (like string or another TDocVariantData), which generate some random GPF, especially when heaptrc paranoid memory checks are enabled.
As a result, direct access to TDocVariantData instances - preferably via _Safe() - rather than a variant variable, will be faster and less error-prone when using FPC.
4.4.1.3. Variant array documents
With _Arr(), an array variant instance will be initialized with data supplied as a list of Value1,Value2,..., e.g.
var V1,V2: variant; // stored as any variant
...
V1 := _Arr(['John','Mark','Luke']);
V2 := _Obj(['name','John','array',_Arr(['one','two',2.5])]); // as nested array
Then you can convert those objects into JSON, by two means:
Using the VariantSaveJson() function, which directly returns the UTF-8 content;
Or by trans-typing the variant instance into a string (this will be slower, but is possible).
writeln(VariantSaveJson(V1));
writeln(V1); // implicit conversion from variant into string
// both commands will write '["John","Mark","Luke"]'
writeln(VariantSaveJson(V2));
writeln(V2); // implicit conversion from variant into string
// both commands will write '{"name":"John","array":["one","two",2.5]}'
As with any object document, the Delphi IDE debugger is able to display such array variant values as their JSON representation.
Late-binding is also available, with a special set of pseudo-methods:
writeln(V1._Count); // will write 3 i.e. the number of items in the array document
writeln(V1._Kind);  // will write 2 i.e. ord(dvArray)
for i := 0 to V1._Count-1 do
  writeln(V1.Value(i),':',V1._(i)); // Value() or _() pseudo-methods
// will write in the console:
//  John John
//  Mark Mark
//  Luke Luke
if V1.Exists('John') then // Exists() pseudo-method
writeln('John found in array');
V1.Add('new item'); // add "new item" to the array
V1._ := 'another new item'; // add "another new item" to the array
writeln(V1); // will write '["John","Mark","Luke","new item","another new item"]'
V1.Delete(2);
V1.Delete(1);
writeln(V1); // will write '["John","Luke","another new item"]'
When using late-binding, the object properties or array items are retrieved as varByRef, so you can even run the pseudo-methods on any nested member:
V := _Json('["root",{"name":"Jim","year":1972}]');
V.Add(3.1415);
assert(V='["root",{"name":"Jim","year":1972},3.1415]');
V._(1).Delete('year'); // delete a property of the nested object
assert(V='["root",{"name":"Jim"},3.1415]');
V.Delete(1); // delete an item in the main array
assert(V='["root",3.1415]');
Of course, trans-typing into a TDocVariantData record is possible, and will be slightly faster than using late-binding. As usual, using _Safe(aVariant)^ function is safer, especially when working on varByRef members returned via late-binding.
As with an object document, you can also allocate directly the TDocVariantData instance on stack, if you do not need any variant-oriented access to the array:
var Doc: TDocVariantData;
...
Doc.Init; // needed for proper initialization - see also Doc.InitArray()
assert(Doc.Kind=dvUndefined); // this instance has no defined sub-type
Doc.AddItem('one'); // add some items to the array
Doc.AddItem(2);
assert(Doc.Kind=dvArray); // is now identified as an array
assert(Doc.Value[0]='one'); // direct read access to the items
assert(Doc.Values[0]='one'); // with index check
assert(Doc.Count=2);
writeln(Doc.ToJSON); // will write '["one",2]'
Doc.Delete(0);
assert(Doc.Count=1);
writeln(Doc.ToJSON); // will write '[2]'
You could use the A[] property to retrieve an object property as a TDocVariant array, or the A_[] property to add a missing array property to an object, for instance:
Doc.Clear; // reset the previous Doc content
writeln(Doc.A['test']); // will write 'null'
Doc.A_['test']^.AddItems([1,2]);
writeln(Doc.ToJSON); // will write '{"test":[1,2]}'
writeln(Doc.A['test']); // will write '[1,2]'
Doc.A_['test']^.AddItems([3,4]);
writeln(Doc.ToJSON); // will write '{"test":[1,2,3,4]}'
4.4.1.4. Create variant object or array documents from JSON
With _Json() or _JsonFmt(), either an object or an array variant instance will be initialized with data supplied as JSON, e.g.
var V1,V2,V3,V4: variant; // stored as any variant
...
V1 := _Json('{"name":"john","year":1982}'); // strict JSON syntax
V2 := _Json('{name:"john",year:1982}'); // with MongoDB extended syntax for names
V3 := _JsonFmt('{"name":?,"year":?}',[],['john',1982]);
V4 := _JsonFmt('{%:?,%:?}',['name','year'],['john',1982]);
writeln(VariantSaveJSON(V1));
writeln(VariantSaveJSON(V2));
writeln(VariantSaveJSON(V3));
writeln(VariantSaveJSON(V4));
// all four commands will write '{"name":"john","year":1982}'
Of course, you can nest objects or arrays as parameters to the _JsonFmt() function.
The supplied JSON can be either in strict JSON syntax, or with the MongoDB extended syntax, i.e. with unquoted property names. Forgetting about the quotes around the property names of your JSON can be pretty convenient - and less error-prone - when typing the Delphi code.
Note that TDocVariant implements an open interface for adding any custom extensions to JSON: for instance, if the SynMongoDB.pas unit is used in your application, you will be able to create any MongoDB-specific types in your JSON, like ObjectID(), NumberDecimal("..."), new Date() or even /regex/option.
As with any object or array document, the Delphi IDE debugger is able to display such variant values as their JSON representation.
4.4.1.5. Per-value or per-reference
By default, the variant instance created by _Obj() _Arr() _Json() _JsonFmt() will use a copy-by-value pattern. It means that when an instance is assigned to another variable, a new variant document will be created, and all internal values will be copied. Just like a record type.
This will imply that if you modify any item of the copied variable, it won't change the original variable:
var V1,V2: variant;
...
V1 := _Obj(['name','John','year',1972]);
V2 := V1; // create a new variant, and copy all values
V2.name := 'James'; // modifies V2.name, but not V1.name
writeln(V1.name,' and ',V2.name);
// will write 'John and James'
As a result, your code will be perfectly safe to work with, since V1 and V2 will be uncoupled.
But one drawback is that passing such a value may be pretty slow, for instance, when you nest objects:
var V1,V2: variant;
...
V1 := _Obj(['name','John','year',1972]);
V2 := _Arr(['John','Mark','Luke']);
V1.names := V2; // here the whole V2 array will be re-allocated into V1.names
Such a behavior could be pretty time and resource consuming, in case of a huge document.
All _Obj() _Arr() _Json() _JsonFmt() functions have an optional TDocVariantOptions parameter, which allows changing the behavior of the created TDocVariant instance, especially setting dvoValueCopiedByReference.
This particular option will set the copy-by-reference pattern:
var V1,V2: variant;
...
V1 := _Obj(['name','John','year',1972],[dvoValueCopiedByReference]);
V2 := V1; // creates a reference to the V1 instance
V2.name := 'James'; // modifies V2.name, but also V1.name
writeln(V1.name,' and ',V2.name);
// will write 'James and James'
You may think that this behavior is somewhat weird for a variant type. But if you forget about per-value objects and consider those TDocVariant types as a Delphi class instance (which is a per-reference type), without the need of having a fixed schema nor handling the memory manually, it will probably start to make sense.
Note that a set of global functions have been defined, which allow direct creation of documents with per-reference instance lifetime, named _ObjFast() _ArrFast() _JsonFast() _JsonFastFmt(). Those are just wrappers around the corresponding _Obj() _Arr() _Json() _JsonFmt() functions, with the following JSON_OPTIONS[true] constant passed as options parameter:
const
  /// some convenient TDocVariant options
  // - JSON_OPTIONS[false] is _Json() and _JsonFmt() functions default
  // - JSON_OPTIONS[true] are used by _JsonFast() and _JsonFastFmt() functions
  JSON_OPTIONS: array[Boolean] of TDocVariantOptions = (
    [dvoReturnNullForUnknownProperty],
    [dvoReturnNullForUnknownProperty,dvoValueCopiedByReference]);
When working with complex documents, e.g. with BSON / MongoDB documents, almost all content will be created in "fast" per-reference mode.
4.4.2. Advanced TDocVariant process
4.4.2.1. Number values options
By default, TDocVariantData will only recognize integer, Int64 and currency - see Currency handling - as number values. Any floating point value which may not be translated to/from its JSON textual representation safely will be stored as a JSON string, i.e. unless it matches an integer or a value with up to 4 fixed decimals, within 64-bit precision. We stated that JSON serialization should be conservative, i.e. serializing then unserializing (or the other way round) should return the very same value; parsing JSON is a matter of (difficult) choices - see http://seriot.ch/parsing_json.php#5 - and we chose to be paranoid and not lose information by default.
You can use the _JsonFastFloat() wrapper or set the dvoAllowDoubleValue option to TDocVariantData, so that such floating-point numbers will be recognized and stored as double. In this case, only varDouble storage will be used for the variant values, i.e. 64-bit IEEE 754 double values, handling the 5.0 x 10^-324 .. 1.7 x 10^308 range. With such floating-point values, you may lose precision and digits during the JSON serialization process: this is why it is not enabled by default.
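For instance, here is a minimal sketch of both behaviors, assuming a local V: variant:
V := _JsonFast('{"value":3.141592653589793}');      // "value" kept as a JSON string by default
V := _JsonFastFloat('{"value":3.141592653589793}'); // "value" recognized and stored as a double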
Also note that some JSON engines do not support 64-bit integer numbers. For instance, JavaScript engines handle only up to 53 bits of information without precision loss (the significand bits), due to their internal storage as an 8-byte IEEE 754 container. In some cases, it is safer to use a JSON string representation of such numbers, as is done with the woIDAsIDstr value of TTextWriterWriteObjectOption for safe serialization of TSQLRecord.ID ORM values.
If you want to work with high-precision floating point numbers, consider using TDecimal128 values, as implemented in SynMongoDB.pas, which support 128-bit high precision decimals, as defined by the IEEE 754-2008 128-bit decimal floating point standard, and handled in MongoDB 3.4+. Their conversion to/from text - therefore to/from JSON - won't lose nor round any digit, as long as the value fits in its 128-bit storage.
4.4.2.2. Object or array document creation options
As stated above, a TDocVariantOptions parameter lets you define the behavior of a TDocVariant custom type for a given instance. Please refer to the documentation of this set of options to find out the available settings. Some are related to the memory model, others to the case-sensitivity of the property names, others to the behavior expected in case of a non-existing property, and so on...
Note that this setting is local to the given variant instance.
In fact, TDocVariant does not force you to stick to one memory model nor a set of global options, but you can use the best pattern depending on your exact process. You can even mix the options - i.e. including some objects as properties in an object created with other options - but in this case, the initial options of the nested object will remain. So you should better use this feature with caution.
You can use the _Unique() global function to force a variant instance to have a unique set of options, with all nested documents becoming by-value, or _UniqueFast() for all nested documents to become by-reference.
// assuming V1='{"name":"James","year":1972}' created by-reference
_Unique(V1);             // change options of V1 to be by-value
V2 := V1; // creates a full copy of the V1 instance
V2.name := 'John'; // modifies V2.name, but not V1.name
writeln(V1.name); // write 'James'
writeln(V2.name); // write 'John'
V1 := _Arr(['root',V2]); // created as by-value by default, as V2 was
writeln(V1._Count); // write 2
_UniqueFast(V1);    // change options of V1 to be by-reference
V2 := V1;
V1._(1).name := 'Jim';
writeln(V1);
writeln(V2);
// both commands will write '["root",{"name":"Jim","year":1972}]'
The easiest is to stick to one set of options in your code, i.e.:
Either using the _*() global functions if your business code does send some TDocVariant instances to any other part of your logic, for further storage: in this case, the by-value pattern does make sense;
Or using the _*Fast() global functions if the TDocVariant instances are local to a small part of your code, e.g. used as dynamic schema-less Data Transfer Objects (DTO).
In all cases, be aware that, like any class type, the const, var and out specifiers of method parameters do not apply to the TDocVariant value, but to its reference.
4.4.2.3. Integration with other mORMot units
In fact, whenever a dynamic schema-less storage structure is needed, you may use a TDocVariant instance instead of class or record strong-typed types:
Client-Server ORM - see below - will support TDocVariant in any of the TSQLRecord variant published properties (and store them as JSON in a text column);
Interface-based services - see below - will support TDocVariant as variant parameters of any method, which makes them perfect DTOs;
Since JSON support is implemented with any TDocVariant value from the ground up, it makes a perfect fit for working with AJAX clients, in a script-like approach;
If you use our SynMongoDB.pas / mORMotMongoDB.pas units to access a MongoDB server, TDocVariant will be the native storage to create or access nested BSON array or object documents - that is, it will allow proper ODM storage;
Cross-cutting features (like logging or record / dynamic array enhancements) will also benefit from this TDocVariant custom type.
We are pretty convinced that once you start playing with TDocVariant, you won't be able to live without it any more. It introduces the full power of late-binding and dynamic schema-less patterns to your application code, which can be pretty useful for prototyping or in Agile development. You do not need to use scripting engines like Python or JavaScript: Delphi is perfectly able to handle dynamic coding!
4.5. Cross-cutting functions
4.5.1. Iso8601 time and date
For date/time storage as text, the framework will use ISO 8601 encoding. Dates could be encoded as YYYY-MM-DD or YYYYMMDD, time as hh:mm:ss or hhmmss, and combined date and time representations as <date>T<time>, i.e. YYYY-MM-DDThh:mm:ss or YYYYMMDDThhmmss.
The lexicographical order of the representation thus corresponds to chronological order, except for date representations involving negative years. This allows dates to be naturally sorted by, for example, file systems, or grid lists.
4.5.1.1. TDateTime and TDateTimeMS
In addition to the default TDateTime type, which will be serialized with a second resolution, you may use TDateTimeMS, which will include the milliseconds, i.e. YYYY-MM-DDThh:mm:ss.sss or YYYYMMDDThhmmss.sss.
4.5.1.2. TTimeLog and TTimeLogBits
As an alternative, the framework defines the TTimeLog type, an Int64 value dedicated to date/time storage. This integer storage is encoded as a series of bits, which will map the TTimeLogBits record type, as defined in the SynCommons.pas unit.
The resolution of such values is one second. In fact, it uses internally for computation an abstract "year" of 16 months of 32 days of 32 hours of 64 minutes of 64 seconds. As a consequence, any date/time information can be retrieved from its internal bit-level representation:
0..5 bits will map seconds,
6..11 bits will map minutes,
12..16 bits will map hours,
17..21 bits will map days (minus one),
22..25 bits will map months (minus one),
26..40 bits will map years.
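For illustration, here is a minimal decoding sketch based on the bit layout above - the TimeLogDecode helper is hypothetical, and real code should rather rely on the TTimeLogBits methods from SynCommons.pas:
procedure TimeLogDecode(Value: Int64; out Y,M,D,HH,MM,SS: cardinal);
begin
  SS := Value and 63;               // bits 0..5
  MM := (Value shr 6) and 63;       // bits 6..11
  HH := (Value shr 12) and 31;      // bits 12..16
  D  := ((Value shr 17) and 31)+1;  // bits 17..21 store day-1
  M  := ((Value shr 22) and 15)+1;  // bits 22..25 store month-1
  Y  := Value shr 26;               // bits 26..40
end;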
The ISO 8601 standard allows millisecond resolution, encoded as hh:mm:ss.sss or hhmmss.sss. Our TTimeLog/TTimeLogBits integer encoding uses a second time resolution, and a 64-bit integer storage, so is not able to handle such precision. You could use TDateTimeMS or TUnixMSTime values instead, if milliseconds are required.
Note that since the TTimeLog type is bit-oriented, you can't just add or subtract two TTimeLog values when doing such date/time computation: use a TDateTime temporary conversion in such case - see for instance how the TSQLRest.ServerTimestamp property is computed in the framework source code.
But if you simply want to compare TTimeLog kind of date/time, it is safe to directly compare their Int64 underlying values, since timestamps will be stored in increasing order, with a resolution of one second.
Due to a compiler limitation in older versions of Delphi, directly typecasting a TTimeLog or Int64 variable into a TTimeLogBits record (as in TTimeLogBits(aTimeLog).ToDateTime) could lead to an internal compiler error. In order to circumvent this bug, you will have to use a pointer typecast, e.g. PTimeLogBits(@Value)^.ToDateTime. But in most cases, you should rather use the dedicated TTimeLog conversion functions supplied in SynCommons.pas to manage such timestamps.
See below for additional information about this TTimeLog storage, and how it is handled by the framework ORM, via the additional TModTime and TCreateTime types.
4.5.1.3. TUnixTime and TUnixMSTime
You may consider the TUnixTime type, which holds a 64-bit encoded number of seconds since the Unix Epoch, i.e. 1970-01-01 00:00:00 UTC. You can use the UnixTimeUTC function to return the current timestamp, calling a very fast OS API.
An alternative TUnixMSTime type is also available, which stores the date/time as a 64-bit encoded number of milliseconds since the Unix Epoch, i.e. 1970-01-01 00:00:00 UTC. Milliseconds resolution may be handy in some cases, especially when TTimeLog second resolution is not enough, and you want a more standard encoding than Delphi's TDateTime.
You may consider using TUnixTime and TUnixMSTime especially if the timestamp is likely to be handled by third-party clients following this C/C#/Java/JavaScript encoding. In the Delphi world, TDateTime, TDateTimeMS or TTimeLog types could be preferred.
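As a minimal conversion sketch, assuming the UnixTimeUTC and UnixTimeToDateTime functions from SynCommons.pas:
var ts: TUnixTime;
    dt: TDateTime;
...
ts := UnixTimeUTC;            // current UTC timestamp, using a fast OS API
dt := UnixTimeToDateTime(ts); // convert into a Delphi TDateTime (still UTC)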
4.5.2. Time Zones
One common problem when handling dates and times, is that common time is shown and entered as local, whereas the computer should better use non-geographic information - especially on a Client-Server architecture, where both ends may not be on the same physical region.
A time zone is a region that observes a uniform standard time for legal, commercial, and social purposes. Time zones tend to follow the boundaries of countries and their subdivisions because it is convenient for areas in close commercial or other communication to keep the same time. Most of the time zones on land are offset from Coordinated Universal Time (UTC) by a whole number of hours, or minutes. Even worse, some countries use daylight saving time for part of the year, typically by changing clocks by an hour, twice every year.
The main rule is that any date and time should be stored in UTC, or with an explicit Zone identifier (i.e. an explicit offset to the UTC value). Our framework expects this behavior: every date/time value stored and handled by the ORM, SOA, or any other part of it, is expected to be UTC-encoded. At the presentation layer (e.g. the User Interface), conversion to/from local times should take place, so that the end-user is presented with friendly wall-clock times.
As you may guess, handling time zones is a complex task, which should be managed by the Operating System itself. Since this cultural material is constantly evolving, it is updated as part of the OS.
In practice, the current local time could be converted from UTC using the current system-wide time zone. Among the few parameters you have to set when installing an Operating System are the keyboard layout... and the current time zone to be used. But in a client-server environment, you may have to manage several time zones on the server side: so you can't rely on this global setting.
One sad - but predictable - disappointment is that there is no common way of encoding time zone information. Under Windows, the registry contains a list of time zones, and the associated time bias data. Most POSIX systems (including Linux and Mac OSX) rely on the IANA database, also called tzdata - you may have noticed that this particular package is often updated with your system. The two sets of zone identifiers do not map to each other, so our framework needed something that could be shared across all systems.
The SynCommons.pas unit features the TSynTimeZone class, which is able to retrieve the information from the Windows registry into memory via TSynTimeZone.LoadFromRegistry, or into a compressed file via TSynTimeZone.SaveToFile. Later on, this file could be reloaded on any system, including any Linux flavor, via TSynTimeZone.LoadFromFile, and will return the very same results. The compressed file is pretty small, thanks to its optimized layout and the use of our SynLZ compression algorithm: the full information is stored in a 7 KB file - the same flattened information as JSON would be around 130 KB, and you may compare with the official http://www.iana.org content, which weighs about 280 KB as a tar.gz... Of course, tzdata potentially stores a lot more information than we need.
In practice, you may use TSynTimeZone.Default, which will return an instance read from the current version of the registry under Windows, and will attempt to load the information named after the executable file name (appended as a .tz extension) on other Operating Systems. You may therefore write:
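For instance, a minimal sketch - assuming the TSynTimeZone.UtcToLocal method and the NowUtc function from SynCommons.pas, with a Windows-style zone identifier:
var localTime: TDateTime;
...
localTime := TSynTimeZone.Default.UtcToLocal(NowUtc,'Romance Standard Time');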
You will have to create the needed .tz compressed file under a Windows machine, then provide this file together with any Linux server executable, in its very same folder. On a Cloud-like system, you may store this information in a centralized server, e.g. via a dedicated service - see below - generated from a single reference Windows system via TSynTimeZone.SaveToBuffer, and later on use TSynTimeZone.LoadFromBuffer to decode it from all your cloud nodes. The main benefit is that the time information will stay consistent whatever system it runs on, as you may expect.
Your User Interface could retrieve the IDs and the ready-to-be-displayed text from the TSynTimeZone.Ids and TSynTimeZone.Displays properties, as plain TStrings instances, whose indexes will follow the TSynTimeZone.Zone[] internal information.
As a nice side effect, the TSynTimeZone binary internal storage has been found to be very efficient, and much faster than a manual reading of the Windows registry. Complex local time calculations could be done on the server side, with no fear of breaking down your processing performance.
4.5.3. Safe locks for multi-thread applications
4.5.3.1. Protect your resources
Once your application is multi-threaded, concurrent data access should be protected. Otherwise, a "race condition" issue may appear: for instance, if two threads modify a variable at the same time (e.g. decrease a counter), values may become incoherent and unsafe to use. The best-known symptom is the "deadlock", by which the whole application appears to be blocked and unresponsive. On a server system, which is expected to run 24/7 with no maintenance, such an issue is to be avoided.
In Delphi, protection of a resource (which may be an object, or any variable) is usually done via Critical Sections. A critical section is an object used to make sure that some part of the code is executed only by one thread at a time. A critical section needs to be created/initialized before it can be used, and released when it is not needed anymore. Then, some code is protected using Enter/Leave methods, which will lock its execution: in practice, only a single thread will own the critical section, so only a single thread will be able to execute this code section, and other threads will wait until the lock is released. For best performance, the protected sections should be as small as possible - otherwise the benefit of using threads may be voided, since any other thread will wait for the thread owning the critical section to release the lock.
4.5.3.2. Fixing TRTLCriticalSection
In practice, you may use a TCriticalSection class, or the lower-level TRTLCriticalSection record, which is perhaps to be preferred, since it will use less memory, and could easily be included as a (protected) field to any class definition.
Let's say we want to protect any access to the variables a and b. Here's how to do it with the critical sections approach:
var CS: TRTLCriticalSection;
a, b: integer;
// set before the threads start
InitializeCriticalSection(CS);
// in each TThread.Execute:
EnterCriticalSection(CS);
try
  // protect the lock via a try ... finally block
  // from now on, you can safely make changes to the variables
inc(a);
inc(b);
finally
  // end of safe block
LeaveCriticalSection(CS);
end;
// when the threads stop
DeleteCriticalSection(CS);
In newest versions of Delphi, you may use a TMonitor class, which will let the lock be owned by any Delphi TObject. Before XE5, there was some performance issue, and even now, this Java-inspired feature may not be the best approach, since it is tied to a single object, and is not compatible with older versions of Delphi (or FPC).
Eric Grange reported some years ago - see https://www.delphitools.info/2011/11/30/fixing-tcriticalsection - that TRTLCriticalSection (along with TMonitor) suffers from a severe design flaw in which entering/leaving different critical sections can end up serializing your threads, and the whole can even end up performing worse than if your threads had been serialized. This is because it's a small, dynamically allocated object, so several TRTLCriticalSection memory can end up in the same CPU cache line, and when that happens, you'll have cache conflicts aplenty between the cores running the threads.
The fix proposed by Eric is dead simple:
type
TFixedCriticalSection = class(TCriticalSection)
private
FDummy: array [0..95] of Byte;
end;
4.5.3.3. Introducing TSynLocker
Since we wanted to use a TRTLCriticalSection record instead of a TCriticalSection class instance, we defined a TSynLocker record in SynCommons.pas:
TSynLocker = record
private
fSection: TRTLCriticalSection;
public
Padding: array[0..6] of TVarData;
procedure Init;
procedure Done;
procedure Lock;
procedure UnLock;
end;
As you can see, the Padding[] array will ensure that the CPU cache-line issue won't affect our object.
TSynLocker use is close to TRTLCriticalSection, with some method-oriented behavior:
var safe: TSynLocker;
a, b: integer;
// set before the threads start
safe.Init;
// in each TThread.Execute:
safe.Lock;
try
  // protect the lock via a try ... finally block
  // from now on, you can safely make changes to the variables
inc(a);
inc(b);
finally
  // end of safe block
safe.Unlock;
end;
// when the threads stop
safe.Done;
If your purpose is to protect a method execution, you may use the TSynLocker.ProtectMethod function or explicit Lock/Unlock, as such:
type
  TMyClass = class
  protected
    fSafe: TSynLocker;
    fField: integer;
  public
    constructor Create;
    destructor Destroy; override;
    procedure UseLockUnlock;
    procedure UseProtectMethod;
  end;

{ TMyClass }

constructor TMyClass.Create;
begin
  fSafe.Init; // we need to initialize the lock
end;

destructor TMyClass.Destroy;
begin
  fSafe.Done; // finalize the lock
  inherited;
end;

procedure TMyClass.UseLockUnlock;
begin
  fSafe.Lock;
  try
    // now we can safely access any protected field from multiple threads
    inc(fField);
  finally
    fSafe.UnLock;
  end;
end;

procedure TMyClass.UseProtectMethod;
begin
  fSafe.ProtectMethod; // calls fSafe.Lock and return IUnknown local instance
  // now we can safely access any protected field from multiple threads
  inc(fField);
  // here fSafe.UnLock will be called when IUnknown is released
end;
4.5.3.4. Inheriting from T*Locked
For your own class definitions, you may inherit from some classes providing a TSynLocker instance, as defined in SynCommons.pas - e.g. TSynPersistentLocked.
All those classes will initialize and finalize their owned Safe instance, in their constructor/destructor.
So, we may have written our class as such:
type
TMyClass = class(TSynPersistentLocked)
protected
fField: integer;
  public
    procedure UseLockUnlock;
    procedure UseProtectMethod;
  end;

{ TMyClass }

procedure TMyClass.UseLockUnlock;
begin
  fSafe.Lock;
  try
    // now we can safely access any protected field from multiple threads
    inc(fField);
  finally
    fSafe.UnLock;
  end;
end;

procedure TMyClass.UseProtectMethod;
begin
  fSafe.ProtectMethod; // calls fSafe.Lock and return IUnknown local instance
  // now we can safely access any protected field from multiple threads
  inc(fField);
  // here fSafe.UnLock will be called when IUnknown is released
end;
4.5.3.5. Using TAutoLocker
Inheriting from a TSynPersistentLocked class (or one of its siblings) only gives you access to a single TSynLocker per instance. If your class inherits from TSynAutoCreateFields, you may create one or several TAutoLocker published properties, which will be auto-created with the instance:
type
TMyClass = class(TSynAutoCreateFields)
protected
fLock: TAutoLocker;
fField: integer;
  public
    function FieldValue: integer;
  published
    property Lock: TAutoLocker read fLock;
  end;

{ TMyClass }

function TMyClass.FieldValue: integer;
begin
fLock.ProtectMethod;
result := fField;
inc(fField);
end;

var c: TMyClass;
begin
c := TMyClass.Create;
Assert(c.FieldValue=0);
Assert(c.FieldValue=1);
c.Free;
end.
In practice, TSynAutoCreateFields is a very powerful way of defining Value objects, i.e. objects containing nested objects or even arrays of objects. You may use its ability to create the needed TAutoLocker instances in an automated way. But be aware that if you serialize such an instance into JSON, its nested TAutoLocker properties will be serialized as void properties - which may not be the expected result.
4.5.3.6. Injecting IAutoLocker instances
If your class inherits from TInjectableObject, you may define the following:
type
TMyClass = class(TInjectableObject)
private
fLock: IAutoLocker;
fField: integer;
  public
    function FieldValue: integer;
  published
    property Lock: IAutoLocker read fLock write fLock;
  end;

{ TMyClass }

function TMyClass.FieldValue: integer;
begin
Lock.ProtectMethod;
result := fField;
inc(fField);
end;

var c: TMyClass;
begin
c := TMyClass.CreateInjected([],[],[]);
Assert(c.FieldValue=0);
Assert(c.FieldValue=1);
c.Free;
end;
Here we use dependency resolution - see below - to let the TMyClass.CreateInjected constructor scan its published properties, and therefore search for a provider of IAutoLocker. Since IAutoLocker is globally registered to be resolved with TAutoLocker, our class will initialize its fLock field with a new instance. Now we could use Lock.ProtectMethod to use the associated TAutoLocker's TSynLocker critical section, as usual.
Of course, this may sound more complicated than manual TSynLocker handling, but if you are writing an interface-based service - see below - your class may already inherit from TInjectableObject for its own dependency resolution, so this trick may be very convenient.
4.5.3.7. Safe locked storage in TSynLocker
When we fixed the potential CPU cache-line issue, do you remember that we added a padding binary buffer to the TSynLocker definition? Since we do not want to waste resources, TSynLocker gives easy access to its internal data, and allows those values to be handled directly. Since the padding is stored as 7 slots of variant values, you could store any kind of data there, including complex TDocVariant documents or arrays.
Our class may use this feature, and store its integer field value in the internal slot 0:
type
  TMyClass = class(TSynPersistentLocked)
  public
    procedure UseInternalIncrement;
    function FieldValue: integer;
  end;

{ TMyClass }

function TMyClass.FieldValue: integer;
begin
  // value read will also be protected by the mutex
  result := fSafe.LockedInt64[0];
end;

procedure TMyClass.UseInternalIncrement;
begin
  // this dedicated method will ensure an atomic increase
  fSafe.LockedInt64Increment(0,1);
end;
Note that a more straightforward rewrite of UseInternalIncrement would not be atomic:
procedure TMyClass.UseInternalIncrement;
begin
  fSafe.LockedInt64[0] := fSafe.LockedInt64[0]+1; // not an atomic increment
end;
In the above line, two locks are acquired (one per LockedInt64 property call), so another thread may modify the value in-between, and the increment may not be as accurate as expected.
TSynLocker offers some dedicated properties and methods to handle this safe storage; those expect an Index value in the 0..6 range.
You may store a pointer or a reference to a TObject instance, if necessary.
Having such a tool-set of thread-safe methods does make sense, in the context of our framework, which offers multi-thread server abilities - see below.
4.5.3.8. Thread-safe TSynDictionary
Remember that the TSynDictionary class is thread-safe. In fact, the TSynDictionary methods are protected by a TSynLocker instance, and internal Count or TimeOuts values are actually stored within its 7 locked storage slots.
You may consider defining TSynDictionary instances in your business logic, or in the public API layer of your services, with proper thread safety - see below.
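As an illustrative sketch - assuming the Add and FindAndCopy methods, with RawUTF8 keys mapped to integer values:
var dict: TSynDictionary;
    key: RawUTF8;
    value: integer;
...
dict := TSynDictionary.Create(TypeInfo(TRawUTF8DynArray),TypeInfo(TIntegerDynArray));
try
  key := 'answer';
  value := 42;
  dict.Add(key,value);            // thread-safe insertion
  value := 0;
  if dict.FindAndCopy(key,value) then
    writeln(value);               // will write '42'
finally
  dict.Free;
end;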
5. Object-Relational Mapping
Generic access to the data is implemented by defining high-level objects as Delphi classes, descendants of a main TSQLRecord class.
In our Client-Server ORM, those TSQLRecord classes can be used for at least three main purposes:
To store and retrieve data from any database engine - for most common usage, you can forget about writing SQL queries: CRUD data access statements (SELECT / INSERT / UPDATE / DELETE) are all created on the fly by the Object-relational mapping (ORM) core of mORMot - see below - a NoSQL engine like MongoDB can even be accessed the same way - see below;
To have business logic objects accessible for both the Client and Server side, in a RESTful approach - see below;
To fill a grid content with the proper field type (e.g. grid column names are retrieved from property names after translation, enumerations are displayed as plain text, or boolean as a checkbox); to create menus and reports directly from the field definition; to have edition window generated in an automated way - see below.
Our ORM engine has genuine advanced features like convention over configuration, integrated security, local or remote access, REST JSON publishing (for AJAX or mobile clients), direct access to the database (by-passing slow DB.pas unit), content in-memory cache, optional audit-trail (change tracking), and integration with other parts of the framework (like SOA, logging, authentication...).
5.1. TSQLRecord fields definition
All the framework ORM process relies on the TSQLRecord class. This abstract TSQLRecord class features a lot of built-in methods, convenient to do most of the ORM process in a generic way, at record level.
It first defines a primary key field, defined as ID: TID, i.e. as Int64 in mORMot.pas:
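A simplified sketch of the relevant declarations could be (the actual mORMot.pas class features many more methods and attributes):
type
  /// the Int64 type used for the TSQLRecord primary key
  TID = type Int64;
  /// simplified view of the main ORM class
  TSQLRecord = class(TObject)
  protected
    fID: TID;
  public
    property ID: TID read fID;
  end;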
In fact, our ORM relies on an Int64 primary key, matching the SQLite3 ID/RowID primary key.
You may be disappointed by this limitation, which is needed by the SQLite3's implementation of Virtual Tables - see below. We won't debate about a composite primary key (i.e. several fields), which is not a good idea for an ORM. In your previous RDBMS data modeling, you may be used to define a TEXT primary key, or even a GUID primary key: those kinds of keys are somewhat less efficient than an INTEGER, especially for ORM internals, since they are not monotonic. You can always define a secondary key, as string or TGUID field, if needed - using stored AS_UNIQUE attribute as explained below.
All published properties of the TSQLRecord descendant classes are then accessed via RTTI in a Client-Server RESTful architecture.
For example, a database Baby Table is defined in Delphi code as:
type
  /// some enumeration
  // - will be written as 'Female' or 'Male' in our UI Grid
  // - will be stored as its ordinal value, i.e. 0 for sFemale, 1 for sMale
  // - as you can see, ladies come first, here
  TSex = (sFemale, sMale);

  /// table used for the Babies queries
  TSQLBaby = class(TSQLRecord)
  private
    fName: RawUTF8;
    fAddress: RawUTF8;
    fBirthDate: TDateTime;
    fSex: TSex;
  published
    property Name: RawUTF8 read fName write fName;
    property Address: RawUTF8 read fAddress write fAddress;
    property BirthDate: TDateTime read fBirthDate write fBirthDate;
    property Sex: TSex read fSex write fSex;
  end;
By adding this TSQLBaby class to a TSQLModel instance, common for both Client and Server, the corresponding Baby table is created by the Framework in the database engine (SQLite3 natively or any external database). All SQL work ('CREATE TABLE ...') is done by the framework. Just code in Pascal, and all is done for you. Even the needed indexes will be created by the ORM. And you won't miss any ' or ; in your SQL query any more.
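For instance, registering this class in a data model shared by both Client and Server could look like the following minimal sketch:
var Model: TSQLModel;
...
Model := TSQLModel.Create([TSQLBaby]); // the TSQLBaby class will be mapped to a Baby table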
The following published property types are handled by the ORM, and will be converted as specified to database content (in SQLite3, an INTEGER is an Int64, FLOAT is a double, TEXT is UTF-8 encoded text):
Delphi         SQLite3    Remarks
byte           INTEGER
word           INTEGER
integer        INTEGER
cardinal       INTEGER
Int64          INTEGER
boolean        INTEGER    0 is false, anything else is true
enumeration    INTEGER    store the ordinal value of the enumerated item (i.e. starting at 0 for the first element)
set            INTEGER    each bit corresponding to an enumerated item (therefore a set of up to 64 elements can be stored in such a field)
single         FLOAT
double         FLOAT
extended       FLOAT      stored as double (precision lost)
currency       FLOAT      safely converted to/from currency type with fixed decimals, without rounding error
TSQLRecord     INTEGER    32-bit RowID pointing to another record (warning: the field value contains pointer(RowID), not a valid object instance - the record content must be retrieved with late-binding via its ID using a PtrInt(Field) typecast or the Field.ID method), or by using e.g. CreateJoined() - 64-bit under Win64
TRecordReference / TRecordReferenceToBeDeleted   INTEGER    able to join any row on any table of the model, by storing both ID and TSQLRecord class type in a RecordRef-like Int64 value, with automatic reset to 0 (for TRecordReference) or row deletion (for TRecordReferenceToBeDeleted) when the pointed record is deleted
record         TEXT       JSON string or object, directly handled since Delphi XE5, or as defined in code by overriding TSQLRecord.InternalRegisterCustomProperties for prior versions
TRecordVersion INTEGER    64-bit revision number, which will be monotonically updated each time the object is modified, to allow remote synchronization - see below
5.1.1. Property Attributes
Some additional attributes may be added to the published field definitions:
If the property is marked as stored AS_UNIQUE (i.e. stored false), it will be created as UNIQUE in the database (i.e. a SQL index will be created and uniqueness of the value will be checked at insert/update);
For a dynamic array field, the index number can be used for the TSQLRecord. DynArray(DynArrayFieldIndex) method to create a TDynArray wrapper mapping the dynamic array data;
For a RawUTF8 / string / WideString / WinAnsiString field of an "external" class - i.e. a TEXT field stored in a remote SynDB.pas-based database - see below, the index number will be used to define the maximum character size of this field, when creating the corresponding column in the database (SQLite3 or PostgreSQL do not have any such size expectations).
For instance, the following class definition will create an index for its SerialNumber property (up to 30 characters long if stored in an external database), and will expect a link to a model of diaper (TSQLDiaperModel) and the baby which used it (TSQLBaby). An ID / RowID column will be always available (from TSQLRecord), so in this case, you will be able to make a fast lookup for a particular diaper from either its internal mORMot ID, or its official unique serial number:
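Such a definition could look like the following sketch (field names are illustrative):
type
  TSQLDiaper = class(TSQLRecord)
  private
    fSerialNumber: RawUTF8;
    fModel: TSQLDiaperModel;
    fBaby: TSQLBaby;
  published
    /// indexed (unique) text field, up to 30 characters in an external database
    property SerialNumber: RawUTF8
      index 30 read fSerialNumber write fSerialNumber stored AS_UNIQUE;
    /// link to the model of diaper
    property Model: TSQLDiaperModel read fModel write fModel;
    /// link to the baby which used it
    property Baby: TSQLBaby read fBaby write fBaby;
  end;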
Note that TNullableUTF8Text kind of property will follow the same index ### attribute interpretation.
5.1.2. Text fields
In practice, the generic string type is handled (as UnicodeString under Delphi 2009 and later), but you may lose some content if you're working with a pre-Unicode version of Delphi (in which string = AnsiString with the current system code page). So we won't recommend its usage.
The natural Delphi type to be used for TEXT storage in our framework is RawUTF8, as introduced for Unicode and UTF-8. All business process should better use RawUTF8 variables and methods (you have all the necessary functions in SynCommons.pas); then you should explicitly convert the RawUTF8 content into a string using U2S / S2U from mORMoti18n.pas, or StringToUTF8 / UTF8ToString, which will handle proper char-set conversion according to the current i18n settings. On Unicode versions of Delphi (starting with Delphi 2009), you can directly assign a string / UnicodeString value to / from a RawUTF8, but this implicit conversion will be slightly slower than our StringToUTF8 / UTF8ToString functions. With pre-Unicode versions of Delphi (up to Delphi 2007), such a direct assignment will probably lose data for all non-ASCII 7-bit characters, so an explicit call to the StringToUTF8 / UTF8ToString functions is required.
You will find in the SynCommons.pas unit all low-level RawUTF8 processing functions and classes, to be used instead of any SysUtils.pas functions. The mORMot core implementation of RawUTF8 is very optimized for speed and multi-threading, so it is recommended not to use string in your code, unless you access the VCL / User Interface layer.
Having such a dedicated RawUTF8 type will also ensure that you are not leaking your domain from its business layer to the presentation layer, as defined with Multi-tier architecture:
Strings in Domain Driven Design n-Tier Architecture
For additional information about UTF-8 handling in the framework, see Unicode and UTF-8.
5.1.3. Date and time fields
Delphi TDateTime and TDateTimeMS properties will be stored as ISO 8601 text in the database, with second and millisecond resolution respectively. See Iso8601 time and date for details about this text encoding.
As an alternative, the TTimeLog / TModTime / TCreateTime types described above use a bit-oriented Int64 encoding. This format will be very fast for comparing dates or converting into/from text, and will be stored as INTEGER in the database, therefore more efficiently than the plain ISO 8601 text used for TDateTime fields.
In practice, TModTime and TCreateTime values are inter-exchangeable with TTimeLog. They are just handled with special care by the ORM, so that their associated field value will be updated with the current UTC timestamp, for every TSQLRecord modification (for TModTime), or at entry creation (for TCreateTime). The time value stored is in fact the UTC timestamp, as returned from the current REST Server: in fact, when any REST client performs a connection, it will retrieve any time offset from the REST Server, which will be used to store a consistent time value across all Clients.
You may also define a TUnixTime property, which will store the number of seconds since 1970-01-01 00:00:00 UTC as INTEGER in the database, and serialized as 64-bit JSON number - or TUnixMSTime if you expect milliseconds resolution. This encoding has the benefit of being handled by SQlite3 date/time functions, and interoperable with most third-party languages.
5.1.4. TSessionUserID field
If you define a TSessionUserID published property, this field will be automatically filled at creation or modification of the TSQLRecord with the current TSQLAuthUser.ID value of the active session. If no session has been initialized from the client side, 0 will be stored.
By design, and similar to TModTime fields, you should use the ORM PUT/POST CRUD methods to compute this field value: manual SQL statements (like UPDATE Table SET Column=0) won't set its content. Also, it is up to the client to fill the TSessionUserID fields before sending their content to the server - the Delphi and cross-platform ORM clients will perform this assignment.
5.1.5. Enumeration fields
Enumerations should be mapped as INTEGER, i.e. via ord(aEnumValue) or TEnum(aIntegerValue).
Enumeration sets should be mapped as INTEGER, with byte/word/integer type, according to the number of elements in the set: for instance, byte(aSetValue) for up to 8 elements, word(aSetValue) for up to 16 elements, and integer(aSetValue) for up to 32 elements in the set.
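As an illustrative sketch, filtering on the TSQLBaby.Sex enumeration defined above could bind the ordinal value as a parameter - aClient being any TSQLRest instance:
var Baby: TSQLBaby;
...
Baby := TSQLBaby.CreateAndFillPrepare(aClient,'Sex=?',[ord(sMale)]);
try
  while Baby.FillOne do
    writeln(Baby.Name); // process each matching row
finally
  Baby.Free;
end;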
5.1.6. Floating point and Currency fields
For standard floating-point values, the framework natively handles the double and currency kind of variables.
In fact, double is the native type handled by most database providers - it is also native to the SSE set of opcodes of newer CPUs (as handled by Delphi XE 2 in 64-bit mode). Lack of extended should not be problematic (if it is mandatory, a dedicated set of mathematical classes should be preferred to a database), and could be implemented with the expected precision via a TEXT field (or a BLOB mapped by a dynamic array).
The currency type is the standard Delphi type to be used when storing and handling monetary values, native to the x87 FPU - when it comes to money, a dedicated type is worth the cost in a "rich man's world". It will avoid any rounding problems, assuming exact 4 decimals precision. It is able to safely store numbers in the range -922337203685477.5808 .. 922337203685477.5807. Should be enough for your pocket change.
As stated by the official Delphi documentation:
Currency is a fixed-point data type that minimizes rounding errors in monetary calculations. On the Win32 platform, it is stored as a scaled 64-bit integer with the four least significant digits implicitly representing decimal places. When mixed with other real types in assignments and expressions, Currency values are automatically divided or multiplied by 10000.
In fact, this type matches the corresponding OLE and .Net implementation of currency. It is still implemented the same in the Win64 platform (since XE 2). The Int64 binary representation of the currency type (i.e. value*10000 as accessible via a typecast like PInt64(@aCurrencyValue)^) is a safe and fast implementation pattern.
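For instance, a minimal sketch of this Int64 trans-typing:
var price: currency;
    scaled: Int64;
...
price := 1.05;
scaled := PInt64(@price)^;   // = 10500, i.e. the exact value * 10000
inc(scaled,250);             // add 0.025 with no rounding error
PInt64(@price)^ := scaled;   // price is now exactly 1.075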
In our framework, we tried to avoid any unnecessary conversion to float values when dealing with currency values. Some dedicated functions have been implemented - see Currency handling - for fast and secure access to currency published properties via RTTI, especially when converting values to or from JSON text. Using the Int64 binary representation can be not only faster, but also safer: you will avoid any rounding problem which may be introduced by the conversion to a float type. For all database process, especially with external engines, the SynDB.pas units will try to avoid any conversion to/from double for the dedicated ftCurrency columns. Rounding issues are a nightmare to track in production - it sounds safe to have a framework handling natively a currency type from the ground up.
5.1.7. TSQLRecord fields
It is worth saying that TSQLRecord published properties are not class instances by default, as they would be in regular Delphi code. After running the TSQLRecord.Create() or CreateAndFillPrepare() constructors, you should never call aMyRecord.AnotherRecord.Property directly, or you will get an Access Violation.
In fact, TSQLRecord published properties definition is used to define "one to many" or "one to one" relationships between tables. As a consequence, the nested AnotherRecord property won't be a true class instance, but one ID trans-typed as TSQLRecord.
The only exception to this rule is the TSQLRecordMany kind of published properties, which, by design, are true instances, needed to access the pivot table data of a "many to many" relationship. The ORM will auto-instantiate all TSQLRecordMany published properties, then release them at Destroy - so you do not need to maintain their life time.
Note that you may use e.g. the TSQLRecord.CreateJoined() constructor to auto-instantiate and load all TSQLRecord published properties at once, then release them at Destroy - see below.
The ORM will automatically perform the following optimizations for TSQLRecord published fields:
An index will be created on the database, for the corresponding column;
When a referenced record is deleted, the ORM will detect it and automatically set all published properties pointing to this record to 0.
In fact, the ORM won't define a ON DELETE SET DEFAULT foreign key via SQL: this feature won't be implemented at RDBMS level, but emulated at ORM level.
See below for more details about how to work with TSQLRecord published properties.
5.1.8. TID fields
TSQLRecord published properties do match a class instance pointer, so are 32-bit (at least for Win32/Linux32 executables). Since the TSQLRecord.ID field is declared as TID = Int64, we may lose information if the stored ID is greater than 2,147,483,647 (i.e. a signed 32-bit value).
You can define a published property as TID to store any value of our primary key, i.e. up to 9,223,372,036,854,775,807. Note that in this case, there is no information about the joined table.
As a consequence, the ORM will perform the following optimizations for TID fields:
An index will be created on the database, for the corresponding column;
When a referenced record is deleted, the ORM won't do anything, since it has no information about the table to track - this is the main difference with TSQLRecord published property.
You can optionally specify the associated table, using a custom TID type for the published property definition. In this case, you will sub-class TID, using tableNameID as naming convention. For instance, if you define:
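Such a definition could look like the following sketch:
type
  TSQLRecordClientID = type TID;
  TSQLRecordClientToBeDeletedID = type TID;

  TSQLOrder = class(TSQLRecord)
  private
    fClient: TID;
    fOrderedBy: TSQLRecordClientID;
    fOrderedByCascade: TSQLRecordClientToBeDeletedID;
  published
    /// generic TID: indexed, but no deletion tracking
    property Client: TID read fClient write fClient;
    /// will be reset to 0 when the pointed TSQLRecordClient is deleted
    property OrderedBy: TSQLRecordClientID read fOrderedBy write fOrderedBy;
    /// this row will be deleted when the pointed TSQLRecordClient is deleted
    property OrderedByCascade: TSQLRecordClientToBeDeletedID
      read fOrderedByCascade write fOrderedByCascade;
  end;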
Those three published fields will be able to store an Int64 foreign key, and the ORM will ensure a corresponding index is created on the database, to speed up searches on their values. But their type - TID, TSQLRecordClientID, or TSQLRecordClientToBeDeletedID - will define how the deletion process will be handled.
By using the generic TID type, the first Client property won't have any reference to any table, so no deletion tracking will take place.
On the other hand, following the type naming convention, the other OrderedBy and OrderedByCascade properties will be associated with the TSQLRecordClient table of the data model. In fact, the ORM will retrieve the 'TSQLRecordClientID' or 'TSQLRecordClientToBeDeletedID' type names, and search for an associated TSQLRecord by trimming *[ToBeDeleted]ID, which is TSQLRecordClient in this case. As a result, the ORM will be able to track any TSQLRecordClient deletion: for any row pointing to the deleted record, it will ensure that the OrderedBy property will be reset to 0, or that the row containing the OrderedByCascade property will be deleted. Note that the framework won't define an ON DELETE SET DEFAULT or ON DELETE CASCADE foreign key via SQL, but will emulate them at ORM level.
5.1.9. TRecordReference and TRecordReferenceToBeDeleted
In fact, such properties will store in a Int64 value a reference to both a TSQLRecord class (therefore defining a table), and one ID (to define the row).
You could later on use e.g. TSQLRest.Retrieve(Reference) to get a record content in one step.
It is important to remember that the table reference is stored as an index of the TSQLRecord class in the associated TSQLModel. As a consequence, for such TRecordReference* properties to work as expected, you should ensure:
That the order of TSQLRecord classes in the TSQLModel does not change after any model modification: otherwise, all previously stored TRecordReference* values may point to a wrong record;
That both Client and Server side share the same model - at least for the TSQLRecord classes which are used with TRecordReference*.
Depending on the type, the ORM will track the deletion of the pointed record:
TRecordReference fields will be reset to 0 - emulating ON DELETE SET DEFAULT foreign key SQL declaration;
TRecordReferenceToBeDeleted will delete the whole record - emulating ON DELETE CASCADE foreign key SQL declaration.
Just like with TSQLRecord or TSQLRecordClassName[ToBeDeleted]ID fields, this deletion tracking is not defined at RDBMS level, but emulated at ORM level.
In order to work easily with TRecordReference values (which are in fact plain Int64 values), you could transtype them into the RecordRef() record, and access the stored information via a set of helper methods. See below for an example of use of such TRecordReference in a data model, e.g. the AssociatedRecord property of TSQLAuditTrail.
It is worth saying that this deletion tracking is not defined at RDBMS level, but at ORM level. As a consequence, it will work with any kind of databases, including NoSQL and Object-Document Mapping (ODM). In fact, RDBMS engines do not allow defining such ON DELETE trigger on several tables, whereas mORMot handles such composite references as expected for TRecordReference. Since this is not a database level tracking, but only from a mORMot server, if you still use the database directly from legacy code, ensure that you will take care of this tracking, perhaps by using a SOA service instead of direct SQL statements.
5.1.11. Variant fields
The ORM will store variant fields as TEXT in the database, serialized as JSON.
At loading, it will check their content:
If some custom variant types are registered (e.g. MongoDB custom objects), they will be recognized as such (with their extended syntax, if applicable);
It will create a numerical value (integer or double) if the stored text has the corresponding layout;
Otherwise, it will create a string value.
Since all data is stored as TEXT in the column, your queries shall ensure that any SQL WHERE statement handles it as expected (e.g. with a conversion to number before comparison). Even if SQLite3 is able to assign a type to each value in a column (i.e. store a variant just as in Delphi code), we did not use this feature, since we wanted our framework to work with all databases - and SQLite3 is almost alone in having this feature.
At JSON level, variant fields will be transmitted as JSON text or number, depending on the stored value.
If you use a MongoDB external NoSQL database - see below, such variant field will not be stored as JSON text, but as true BSON documents. So you will be able to apply all the advanced search and indexing abilities of this database engine, if needed.
5.1.12. Record fields
Since Delphi XE5, you can define and work directly with published record properties of TSQLRecord:
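For instance, a record property could be published like this - a sketch, using the TSQLMyRecord class and fGUID field named below:
type
  TSQLMyRecord = class(TSQLRecord)
  protected
    fGUID: TGUID;
  published
    // the TGUID record will be serialized as a JSON string,
    // stored as a TEXT column of up to 38 characters (index 38)
    property GUID: TGUID index 38 read fGUID write fGUID;
  end;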
The record will be serialized as JSON - here TGUID will be serialized as a JSON string - then will be stored as TEXT column in the database. We specified an index 38 attribute to state that this column will contain up to 38 characters, when stored on an external database - see below.
Published properties of records are handled by our code, but Delphi doesn't create the corresponding RTTI for such properties before Delphi XE5. So record published properties, as defined in the above class definition, won't work directly for older versions of Delphi, or FreePascal.
You could use a dynamic array with only one element, in order to handle records within your TSQLRecord class definition - see below. But it may be confusing.
If you want to work with such properties before Delphi XE5, you can override the TSQLRecord.InternalRegisterCustomProperties() virtual method of a given table, to explicitly define a record property.
For instance, to register a GUID property mapping a TSQLMyRecord.fGUID: TGUID field:
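A sketch of such an override could look like the following - the exact signature of RegisterCustomPropertyFromTypeName() should be double-checked against your framework revision:
class procedure TSQLMyRecord.InternalRegisterCustomProperties(
  Props: TSQLRecordProperties);
begin
  // register the fGUID: TGUID field as a 'GUID' TEXT column of up to 38 chars
  Props.RegisterCustomPropertyFromTypeName(self,'TGUID','GUID',
    @TSQLMyRecord(nil).fGUID,[],38);
end;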
You may call Props.RegisterCustomPropertyFromRTTI(), supplying the TypeInfo() pointer, for a record containing reference-counted fields like string, variant or nested dynamic arrays. Of course, any custom JSON serialization of the given record type - see below - will be supported.
Those custom record registration methods will define either:
TEXT serialization, for RegisterCustomPropertyFromRTTI() or RegisterCustomPropertyFromTypeName();
BLOB serialization, for RegisterCustomRTTIRecordProperty() or RegisterCustomFixedSizeRecordProperty().
5.1.13. BLOB fields
In fact, several kinds of properties will be stored as BLOB in the database backend:
TSQLRawBlob properties are how you store your binary data, e.g. images or documents;
records which were explicitly registered as BLOB columns.
By default, both dynamic arrays and BLOB record content will be retrieved from the database, encoded as Base64 text.
But TSQLRawBlob properties will be transmitted as RESTful separate resources, as required by the REST scheme. For instance, it means that a first request will retrieve all "simple" fields as JSON, then some other requests are needed to retrieve each BLOB fields as a binary buffer. As a result, TSQLRawBlob won't be transmitted by default, to spare transmission bandwidth and resources.
You can change this default behavior, by setting:
Either the TSQLRestClientURI.ForceBlobTransfert: boolean property, to force the transfer of all BLOBs of all the tables of the data model - this is what is done e.g. for the SynFile main demo - see later in this document;
Or via the TSQLRestClientURI.ForceBlobTransfertTable[] property, for a specified table of the model.
5.1.14. TNullable* fields for NULL storage
In Delphi, nullable types do not exist, as they do for instance in C#, via the int? kind of definition. But at SQL and JSON levels, the NULL values do exist and are expected to be available from our ORM.
In SQLite3 itself, NULL is handled as stated in http://www.sqlite.org/lang_expr.html (see e.g. IS and IS NOT operators). It is worth noting that NULL handling is not consistent among all existing database engines, e.g. when you are comparing NULL with non NULL values... so we recommend using it with care in any database statements, or only with proper (unit) testing, when you switch from one database engine to another.
By default, in the mORMot ORM/SQL code, NULL will appear only in case of a BLOB storage with a size of 0 bytes. Otherwise, you should not see it as a value, in most used types - see TSQLRecord fields definition.
Null-oriented value types have been implemented in our framework, since the object pascal language does not allow defining a nullable type (yet). We choose to store those values as variant, with a set of TNullable* dedicated types, as defined in mORMot.pas:
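Those types are plain variant aliases; a sketch of their declaration, together with a sample record class using them (the class and property names here are illustrative, matching the CLOB and Text fields mentioned below), could be:
type
  TNullableInteger  = type variant;
  TNullableBoolean  = type variant;
  TNullableFloat    = type variant;
  TNullableCurrency = type variant;
  TNullableDateTime = type variant;
  TNullableTimeLog  = type variant;
  TNullableUTF8Text = type variant;

  TSQLNullableRecord = class(TSQLRecord)
  protected
    fInt: TNullableInteger;
    fCLOB: TNullableUTF8Text;
    fText: TNullableUTF8Text;
  published
    property Int: TNullableInteger read fInt write fInt;
    property CLOB: TNullableUTF8Text read fCLOB write fCLOB;          // no size limit
    property Text: TNullableUTF8Text index 32 read fText write fText; // VARCHAR(32)
  end;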
Such a class will let the ORM handle SQL NULL values as expected, i.e. returning a null variant value, or an integer/number/text value if there is something stored. Of course, the corresponding column in the database will have the expected data type, e.g. a NULLABLE INTEGER for TNullableInteger property.
Note that TNullableUTF8Text is handled just like a usual RawUTF8 field - see Property Attributes. That is, without any size limitation by default (as for the CLOB property), or with an explicit size limitation using the index ### attribute (as for the Text property, which will be converted into a VARCHAR(32) SQL column).
You could use dedicated wrapper functions to create a TNullable* value from any non-nullable standard Delphi value.
Some corresponding constants do match the expected null value for each kind, with strong typing (to be used for FPC compatibility, which does not allow direct assignment of a plain null variant to a TNullable* = type variant property).
Those Nullable*ToValue() functions are mandatory for use under FPC, which does not allow mixing plain variant values and specialized TNullable* = type variant values.
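For instance - assuming the wrapper and constant identifiers follow the Nullable* naming pattern described above (check the exact names in mORMot.pas) - you could write:
var rec: TSQLNullableRecord;
begin
  rec := TSQLNullableRecord.Create;
  try
    rec.Int := NullableInteger(10);     // store the value 10
    rec.Text := NullableUTF8Text('ab'); // store the text 'ab'
    rec.CLOB := NullableUTF8TextNull;   // explicitly store a SQL NULL
    if not VarIsNull(rec.Int) then
      writeln(NullableIntegerToValue(rec.Int)); // extract the plain Delphi value
  finally
    rec.Free;
  end;
end;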
Thanks to those types, and their corresponding wrapper functions, you have at hand everything needed to safely store some nullable values into your application database, with proper handling on Delphi side.
5.2. Working with Objects
To access a particular record, the following code can be used to handle CRUD statements (Create Retrieve Update Delete actions are implemented via the Add/Retrieve/Update/Delete methods), following the RESTful pattern - see below - and using the ID primary key as resource identifier:
procedure Test(Client: TSQLRest); // we will use CRUD operations on a REST instance
var Baby: TSQLBaby; // store a record
    ID: TID;        // store a reference to a record
begin
  // create and save a new record, since Smith, Jr was just born
  Baby := TSQLBaby.Create;
  try
    Baby.Name := 'Smith';
    Baby.Address := 'New York City';
    Baby.BirthDate := Date;
    Baby.Sex := sMale;
    ID := Client.Add(Baby,true);
  finally
    Baby.Free; // manage memory as usual
  end;
  // update record data
  Baby := TSQLBaby.Create(Client,ID); // retrieve from ID
  try
    assert(Baby.Name='Smith');
    Baby.Name := 'Smeeth';
    Client.Update(Baby);
  finally
    Baby.Free;
  end;
  // retrieve record data
  Baby := TSQLBaby.Create;
  try
    Client.Retrieve(ID,Baby);
    // we may have written: Baby := TSQLBaby.Create(Client,ID);
    assert(Baby.Name='Smeeth');
  finally
    Baby.Free;
  end;
  // delete the created record
  Client.Delete(TSQLBaby,ID);
end;
Of course, you can keep a TSQLBaby instance alive for a longer time. The same TSQLBaby instance can be used to access several records' content, calling the Retrieve / Add / Delete / Update methods as needed.
No SQL statement to write, nothing to care about database engine expectations (e.g. for date or numbers processing): just accessing objects via high-level methods. It could even work with NoSQL databases, like a fast TObjectList or MongoDB. This is the magic of ORM.
To be honest, the REST pattern does not match the CRUD operations exactly. We had to bend the REST verbs a little - as defined below - to fit our ORM purpose. But all you have to know is that those Add/Update/Delete/Retrieve methods are able to define the full persistence lifetime of your precious objects.
5.3. Queries
5.3.1. Return a list of objects
You can query your table with the FillPrepare or CreateAndFillPrepare methods, for instance all babies with balls and a name starting with the letter 'A':
var aMale: TSQLBaby;
...
aMale := TSQLBaby.CreateAndFillPrepare(Client,
  'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
try
  while aMale.FillOne do
    DoSomethingWith(aMale);
finally
  aMale.Free;
end;
This request loops through all matching records, accessing each row content via a TSQLBaby instance.
The mORMot engine will create a SQL statement with the appropriate SELECT query, retrieve all data as JSON, transmit it between the Client and the Server (if any), then convert the values into properties of our TSQLBaby object instance. Internally, the [CreateAnd]FillPrepare / FillOne methods use a list of records, retrieved as JSON from the Server, and parsed in memory one row a time (using an internal TSQLTableJSON instance).
Note that there is an optional aCustomFieldsCSV parameter available in all FillPrepare / CreateAndFillPrepare methods, by which you may specify a CSV list of field names to be retrieved. It may save some remote bandwidth, if not all record field values are needed in the loop. Note that you should use this aCustomFieldsCSV parameter only to retrieve some data, and that the other fields will remain untouched (i.e. void in case of CreateAndFillPrepare): a later call to Update won't lead to any data loss, since the method will know that it has been called during a FillPrepare / CreateAndFillPrepare process, and only the retrieved fields will be updated on the server side.
You could also create a TObjectList, or - even better for newer versions of Delphi supporting the generics syntax - a TObjectList<T> instance to retrieve all values of a table:
var aList: TObjectList<TSQLBaby>;
aMale: TSQLBaby;
...
aList := Client.RetrieveList<TSQLBaby>('Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
try
  for aMale in aList do
    DoSomethingWith(aMale);
finally
  aList.Free;
end;
Note that this method will use more memory and resources than a *FillPrepare call followed by a while ... FillOne do loop, since the latter will only allocate one instance of the TSQLRecord, then fill the properties of this single instance directly from the returned JSON content, one row at a time. For huge lists, or in a multi-threaded environment, it may make a difference. But the generics syntax can make the code cleaner, or better integrated with your business logic.
5.3.2. Query parameters
For safer and faster database process, the WHERE clause of the request expects some parameters to be specified. They are bound in the ? appearance order in the WHERE clause of the [CreateAnd]FillPrepare query method.
Standard simple kind of parameters (RawUTF8, integer, double, currency..) can be bound directly - as in the sample code above for Name or Sex properties. The first parameter will be bound as 'A%' RawUTF8 TEXT, and the second as the 1 INTEGER value.
As stated previously, BLOB (i.e. sftBlob or TSQLRawBlob) properties are handled separately, via dedicated RetrieveBlob and UpdateBlob method calls (or their global RetrieveBlobFields / UpdateBlobFields twins). In fact, BLOB data is expected to be potentially big (more than a few MB). But you can specify a small BLOB content using an explicit conversion to the corresponding TEXT format, by calling BinToBase64WithMagic() overloaded functions when preparing an UPDATE query, or by defining a TByteDynArray published field instead of TSQLRawBlob. See also ForceBlobTransfert and ForceBlobTransfertTable[] properties of TSQLRestClientURI.
Note that there was a breaking change about the TSQLRecord.Create / FillPrepare / CreateAndFillPrepare and TSQLRest.OneFieldValue / MultiFieldValues methods: for historical reasons, they expected parameters to be marked as % in the SQL WHERE clause, and inlined via :(...): as stated below - since revision 1.17 of the framework, those methods expect parameters marked as ? and with no :(...):. Due to this breaking change, user code review is necessary if you want to upgrade the engine from 1.16 or previous. In all cases, using ? is less confusing for new users, and closer to the usual way of preparing database queries - e.g. as stated below. The TSQLRestClient.ExecuteFmt / ListFmt methods are not affected by this change, since they are just wrappers around the FormatUTF8() function.
For the most complex codes, you may want to prepare ahead the WHERE clause of the ORM request. You may use the overloaded FormatUTF8() function as such:
var where: RawUTF8;
begin
where := FormatUTF8('id=?', [], [SomeID]);
if add_active then
where := FormatUTF8('% and active=?', [where], [ActiveFlag]);
if add_date_ini then
where := FormatUTF8('% and date_ini>=?', [where], [DateToSQL(Date-2)]);
...
Then the request will be easy to create, and fast to execute, thanks to prepared statements in the framework database layer.
5.3.3. Introducing TSQLTableJSON
As we stated above, [CreateAnd]FillPrepare / FillOne methods are implemented via an internal TSQLTableJSON instance.
In short, TSQLTableJSON will expect some JSON content as input, will parse it in rows and columns, associate it with one or more optional TSQLRecord class types, then will let you access the data via its Get* methods.
You can use this TSQLTableJSON class as in the following example:
procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var aList: TSQLTableJSON;
Row: integer;
begin
  aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
    'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
  if aList=nil then
    raise Exception.Create('Impossible to retrieve data from Server');
  try
    for Row := 1 to aList.RowCount do
      writeln('ID=',aList.GetAsInteger(Row,0),' BirthDate=',aList.Get(Row,1));
  finally
    aList.Free;
  end;
end;
For a record with a huge number of fields, specifying the needed fields could save some bandwidth. In the above sample code, the ID column has a field index of 0 (so is retrieved via aList.GetAsInteger(Row,0)) and the BirthDate column has a field index of 1 (so is retrieved as a PUTF8Char via aList.Get(Row,1)). All data rows are processed via a loop using the RowCount property - the first data row is indexed as 1, since row 0 contains the column names.
The TSQLTable class has some methods dedicated to direct cursor handling, as such:
procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var aList: TSQLTableJSON;
begin
aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
  try
    while aList.Step do
      writeln('ID=',aList.Field(0),' BirthDate=',aList.Field(1));
  finally
    aList.Free;
  end;
end;
By using the TSQLTable.Step method, you do not need to check that aList<>nil, since it will return false if aList is not assigned. And you do not need to access the RowCount property, nor specify the current row number.
We may have used not the field index, but the field name, within the loop:
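That is, something like the following sketch, relying on the by-name overload of the Field() method:
  try
    while aList.Step do
      writeln('ID=',aList.Field('ID'),' BirthDate=',aList.Field('BirthDate'));
  finally
    aList.Free;
  end;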
You can also access the field values using late-binding and a local variant, which gives some perfectly readable code:
procedure WriteBabiesStartingWith(const Letters: RawUTF8; Sex: TSex);
var baby: variant;
begin
  with Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
      'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]) do
  try
    while Step(false,@baby) do
      writeln('ID=',baby.ID,' BirthDate=',baby.BirthDate);
  finally
    Free;
  end;
end;
In the above code, late-binding will search for the "ID" and "BirthDate" fields at runtime. But the ability to write baby.ID and baby.BirthDate is very readable. Using a with ... do statement makes the code shorter, but should be avoided if it leads into confusion, e.g. in case of more complex process within the loop.
See also the following methods of TSQLRest: OneFieldValue, OneFieldValues, MultiFieldValue, MultiFieldValues which are able to retrieve either a TSQLTableJSON, or a dynamic array of integer or RawUTF8. And also List and ListFmt methods of TSQLRestClient, if you want to make a JOIN against multiple tables at once.
A TSQLTableJSON content can be associated with a TGrid in order to produce a User Interface taking advantage of the column types, as retrieved from the associated TSQLRecord RTTI. The TSQLTableToGrid class is able to associate any TSQLTable with a standard TDrawGrid, with some enhancements: themed drawing, Unicode handling, column types (e.g. booleans are displayed as check-boxes, dates as text, etc.), column auto-size, column sort, incremental key lookup, optional hiding of IDs, selection...
5.3.4. Note about query parameters
(reading this paragraph is not mandatory at first, so you can skip it if you do not need to know about the mORMot internals - just remember that ? bound parameters are inlined as :(...): in the transmitted JSON content, so they can be set directly as such in any WHERE clause)
If you consider the first sample code:
aMale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
This will execute a SQL statement, with an ORM-generated SELECT, and a WHERE clause using two parameters bound at execution, containing 'A%' RawUTF8 text and 1 integer value.
In fact, from the SQL point of view, the CreateAndFillPrepare() method as called here is exactly the same as:
aMale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE :(''A%''): AND Sex = :(1):');
or
aMale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE :(%): AND Sex = :(%):',['''A%''',ord(sMale)],[]);
or
aMale := TSQLBaby.CreateAndFillPrepare(Client,
FormatUTF8('Name LIKE :(%): AND Sex = :(%):',['''A%''',ord(sMale)]));
The first point is that the 'A%' text has been enclosed within single quotes, as expected by the SQL syntax. In fact, formatting 'Name LIKE :(%): AND Sex = :(%):' with ['''A%''',ord(sMale)] is expected to produce a valid WHERE clause of a SQL statement.
Note we used single quotes, but we may have used double quotes (") inside the :( ): statements. In fact, SQLite3 expects single quotes in its raw SQL statements, whereas our prepared statements :( ): will handle both single ' and double " quotes. Just to avoid any confusion, we'll always show single quotes in the documentation. But you can safely use double quotes within :( ): statements, which could be more convenient than single quotes, which should be doubled within a pascal constant string ''.
The only not-obvious syntax in the above code is the :(%): used for defining prepared parameters in the format string.
In fact, the format string will produce the following WHERE clause parameter as plain text:
aMale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE :(''A%''): AND Sex = :(1):');
So that the following SQL query will be executed by the database engine, after translation by the ORM magic:
SELECT * FROM Baby WHERE Name LIKE ? AND Sex = ?;
With the first ? parameter bound with 'A%' value, and the second with 1.
In fact, when the framework finds some :( ): in the SQL statement string, it will prepare a SQL statement, and will bind the parameters before execution (in our case, the text 'A%' and the integer 1), reusing any matching previously prepared SQL statement. See below for more details about this mechanism.
To be clear, without any prepared statement, you could have used:
aMale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE % AND Sex = %',['''A%''',ord(sMale)],[]);
or
aMale := TSQLBaby.CreateAndFillPrepare(Client,
FormatUTF8('Name LIKE % AND Sex = %',['''A%''',ord(sMale)]));
which will produce the same as:
aMale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE ''A%'' AND Sex = 1');
So that the following SQL statement will be executed:
SELECT * FROM Baby WHERE Name LIKE 'A%' AND Sex = 1;
Note that we prepared the SQL WHERE clause, so that we could use the same request statement for all females with name starting with the character 'D':
aFemale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE :(%): AND Sex = :(%):', ['''D%''',ord(sFemale)]);
Using a prepared statement will speed up the database engine, because the SQL query will have to be parsed and optimized only once.
The second query method, i.e.
aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
'Name LIKE ? AND Sex = ?',[Letters+'%',ord(Sex)]);
is the same as this code:
aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
'Name LIKE :(%): AND Sex = :(%):',[QuotedStr(Letters+'%'),ord(Sex)],[]);
or
aList := Client.MultiFieldValues(TSQLBaby,'ID,BirthDate',
FormatUTF8('Name LIKE :(%): AND Sex = :(%):',[QuotedStr(Letters+'%'),ord(Sex)]));
In both cases, the parameters will be inlined, in order to prepare the statements, and improve execution speed.
We used the QuotedStr standard function to embrace the Letters parameter with quotes, as expected per the SQL syntax.
Of course, using '?' and bound parameters is much easier than '%' and manual :(%): in-lining with a QuotedStr() function call. In your client code, you had better use '?' - but if you find some ':(%):' in the framework source code, or when a WHERE clause is expected within the transmitted JSON content, you won't be surprised.
5.4. Automatic TSQLRecord memory handling
Working with objects is pretty powerful, but requires manual handling of the created instances' lifetime, via try .. finally blocks. Most of the time, the TSQLRecord lifetime will be very short: we allocate one instance in a local variable, then release it when it goes out of scope.
If we take again the TSQLBaby sample, we may write:
function NewMaleBaby(Client: TSQLRest; const Name,Address: RawUTF8): TID;
var Baby: TSQLBaby; // store a record
begin
  Baby := TSQLBaby.Create;
  try
    Baby.Name := Name;
    Baby.Address := Address;
    Baby.BirthDate := Date;
    Baby.Sex := sMale;
    result := Client.Add(Baby,true);
  finally
    Baby.Free;
  end;
end;
To ease this pretty usual pattern, the framework offers some kind of automatic memory management at TSQLRecord level:
function NewMaleBaby(Client: TSQLRest; const Name,Address: RawUTF8): TID;
var Baby: TSQLBaby; // store a record
begin
  TSQLBaby.AutoFree(Baby); // no try..finally needed!
  Baby.Name := Name;
  Baby.Address := Address;
  Baby.BirthDate := Date;
  Baby.Sex := sMale;
  result := Client.Add(Baby,true);
end; // local Baby instance will be released here
It may also be useful for queries. Instead of writing:
var aMale: TSQLBaby;
...
aMale := TSQLBaby.CreateAndFillPrepare(Client,
'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
try
  while aMale.FillOne do
    DoSomethingWith(aMale);
finally
  aMale.Free;
end;
We may write:
var aMale: TSQLBaby;
...
TSQLBaby.AutoFree(aMale,Client,'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
while aMale.FillOne do
DoSomethingWith(aMale);
Without the need to write the try ... finally block.
Be aware that it does not introduce some kind of magic garbage collector, as available in C# or Java. It is not even similar to the ARC memory model used by Apple and the Delphi NextGen compiler. It is just some syntactic sugar creating a local hidden IAutoFree interface, which will be released at the end of the local method by the compiler, and which will also release all associated class instances. So the local class instances should stay in the local scope, and should not be sent to and stored by another process: in such cases, you may encounter access violation issues.
Due to an issue (feature?) in the FPC implementation of interfaces - see http://bugs.freepascal.org/view.php?id=26602 - the above code will not work directly. You should assign the result of this method to a local IAutoFree variable, as such:
var aMale: TSQLBaby;
auto: IAutoFree;
...
auto := TSQLBaby.AutoFree(aMale,Client,'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]);
while aMale.FillOne do
DoSomethingWith(aMale);
One alternative may be to use a with statement, which prevents the need of defining a local variable:
var aMale: TSQLBaby;
...
with TAutoFree.One(aMale,TSQLBaby.CreateAndFillPrepare(Client,
    'Name LIKE ? AND Sex = ?',['A%',ord(sMale)])) do
  while aMale.FillOne do
    DoSomethingWith(aMale);
or:
var aMale: TSQLBaby;
...
with TSQLBaby.AutoFree(aMale,Client,'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]) do
  while aMale.FillOne do
    DoSomethingWith(aMale);
If you want your code to cross-compile with both Delphi and FPC, consider this expectation of the FPC compiler.
5.5. Objects relationship: cardinality
All previous code is fine if your application requires "flat" data. But most of the time, you'll need to define master/child relationship, perhaps over several levels. In data modeling, the cardinality of one data table with respect to another data table is a critical aspect of database design. Relationships between data tables define cardinality when explaining how each table links to another.
In the relational model, tables can have the following cardinality, i.e. can be related as any of:
"One to one".
"Many to one" (rev. "One to many");
"Many to many" (or "has many").
Our mORMot framework handles all those kinds of cardinality.
5.5.1. "One to one" or "One to many"
5.5.1.1. TSQLRecord published properties are IDs, not instance
In order to handle a "One to one" or "One to many" relationship between tables (i.e. normalized Master/Detail in a classical RDBMS approach), you could define TSQLRecord published properties in the object definition.
As stated by TSQLRecord fields definition, TSQLRecord published properties do not contain an instance of the TSQLRecord class. They will instead contain pointer(RowID), and will be stored as an INTEGER in the database.
So the main rule is to never use such published properties directly, as if they were regular class instances: otherwise you'll get an unexpected access violation error.
5.5.1.2. Transtyping IDs
When creating such records, use temporary instances for each detail object, as such:
var One, Two: TSQLMyFileInfo;
MyFile: TSQLMyFile;
begin
One := TSQLMyFileInfo.Create;
Two := TSQLMyFileInfo.Create;
MyFile := TSQLMyFile.Create;
try
One.MyFileDate := ....
One.MyFileSize := ...
MyFile.FirstOne := TSQLMyFileInfo(MyDataBase.Add(One,True)); // add One and store ID in MyFile.FirstOne
Two.MyFileDate := ....
Two.MyFileSize := ...
MyFile.SecondOne:= TSQLMyFileInfo(MyDataBase.Add(Two,True)); // add Two and store ID in MyFile.SecondOne
MyDataBase.Add(MyFile,true);
finally
MyFile.Free;
Two.Free;
One.Free;
end;
end;
The first two assignments, using a class/pointer type cast, will work only in 32-bit (where a pointer has the same size as an integer). Using the TSQLRecord.AsTSQLRecord property will work on all platforms, including 64-bit, and is perhaps easier to deal with in your code.
When accessing the detail objects, you should not access the FirstOne or SecondOne properties directly (they are not class instances, but integer IDs), but use instead the TSQLRecord.Create(aClient: TSQLRest; aPublishedRecord: TSQLRecord; ForUpdate: boolean=false) overloaded constructor, as such:
var One: TSQLMyFileInfo;
MyFile: TSQLMyFile;
begin
MyFile := TSQLMyFile.Create(Client,aMyFileID);
  try
    // here MyFile.FirstOne.MyFileDate will trigger an access violation
    One := TSQLMyFileInfo.Create(Client,MyFile.FirstOne);
    try
      // here you can access One.MyFileDate or One.MyFileSize
    finally
      One.Free;
    end;
  finally
    MyFile.Free;
  end;
end;
Or with a with statement:
var MyFile: TSQLMyFile;
begin
MyFile := TSQLMyFile.Create(Client,aMyFileID);
  try
    // here MyFile.FirstOne.MyFileDate will trigger an access violation
    with TSQLMyFileInfo.Create(Client,MyFile.FirstOne) do
    try
      // here you can access MyFileDate or MyFileSize
    finally
      Free;
    end;
  finally
    MyFile.Free;
  end;
end;
Mapping a TSQLRecord field to an integer ID is a bit difficult to grasp at first. It was the only way we found to define a "one to one" or "one to many" relationship within the class definition, without any property attribute feature of the Delphi compiler (only introduced in newer versions). The main drawback is that the compiler won't be able to identify at compile time some potential GPF issues at run time. It is up to the developer to write correct code, when dealing with TSQLRecord properties. Using the AsTSQLRecord property and the overloaded TSQLRecord.Create(aPublishedRecord) constructor will help a lot.
5.5.1.3. Automatic instantiation and JOINed query
Having to manage at hand all nested TSQLRecord instances can be annoying, and error-prone.
As an alternative, if you want to retrieve a whole TSQLRecord instance including its nested TSQLRecord published properties, you can use the TSQLRecord.CreateJoined() or TSQLRecord.CreateAndFillPrepareJoined() constructors, which:
Will auto-instantiate all TSQLRecord published properties;
Then the ORM core will retrieve all properties, including nested TSQLRecord, via a SELECT ... LEFT JOIN ... statement;
Then the nested TSQLRecord will be released at Destroy of the main instance (to avoid any unexpected memory leak).
So you can safely write:
var MyFile: TSQLMyFile;
begin
  MyFile := TSQLMyFile.CreateJoined(Client,aMyFileID);
  try
    // here MyFile.FirstOne and MyFile.SecondOne are true instances,
    // and have already been retrieved from the database by the constructor,
    // so you can safely access MyFile.FirstOne.MyFileDate or MyFile.SecondOne.MyFileSize here!
  finally
    MyFile.Free; // will also release MyFile.FirstOne and MyFile.SecondOne
  end;
end;
Note that this will work as expected when retrieving some data from the database, but, in the current implementation of the ORM, any Update() call will manage only the main TSQLRecord properties, and the nested TSQLRecord properties' IDs, not the nested properties' values. For instance, in the code above, Client.Update(MyFile) will update the TSQLMyFile table, but won't reflect any modification to the MyFile.FirstOne or MyFile.SecondOne properties. This limitation may be removed in the future - you may explicitly ask for this feature request.
In systems analysis, a many-to-many relationship is a type of cardinality that refers to the relationship between two entities (see also Entity-Relationship Model) A and B in which A may contain a parent row for which there are many children in B and vice versa. For instance, think of A as Authors, and B as Books. An Author can write several Books, and a Book can be written by several Authors. Because most database management systems only support one-to-many relationships, it is necessary to implement such relationships physically via a third junction table, say AB, with two one-to-many relationships A -> AB and B -> AB. In this case the logical primary key for AB is formed from the two foreign keys (i.e. copies of the primary keys of A and B).
From the record point of view, and to follow the ORM vocabulary (in Ruby on Rails, Python, or other ActiveRecord clones), we could speak of "has many" relationship. In the classic RDBMS implementation, a pivot table is created, containing two references to both related records. Additional information can be stored within this pivot table. It could be used, for instance, to store association time or corresponding permissions of the relationship. This is called a "has many through" relationship.
In fact, there are several families of ORM design, when implementing the "many to many" cardinality:
Map collections into JOINed query from the ORM (i.e. pivot tables are abstracted from object lists or collections by the framework, to implement "has many" relationship, but you will have to define lazy loading and won't have "has many through" relationship at hand);
Explicitly handle pivot tables as ORM classes, and provide methods to access to them (it will allow both "has many" and "has many through" relationship).
Store collections within the ORM classes property (data sharding).
In the mORMot framework, we did not implement the 1st implementation pattern, but the 2nd and 3rd:
You can map the DB with dedicated TSQLRecordMany classes, which allows some true pivot table to be available (that is the 2nd family), introducing true "has many through" cardinality;
But for most applications, it is definitely easier to use TCollection (of TPersistent classes) or dynamic arrays within one TSQLRecord class, that is, data sharding (the 3rd family).
Up to now, there is no explicit Lazy Loading feature in our ORM. There is no native handling of TSQLRecord collections or lists (as they appear in the first family of ORMs). This could sound like a limitation, but it lets your code manage exactly which data is to be retrieved from the server, and keeps bandwidth and memory use as low as possible. Use of a pivot table (via the TSQLRecordMany kind of records) allows tuned access to the data, and implements an optimal lazy loading feature. Note that the only case in which some TSQLRecord instances are automatically created by the ORM is for those TSQLRecordMany published properties.
5.5.2.1.1. Embedding all needed data within the record
Defining a pivot table is a classic and powerful use of a relational database, and unleashes its power (especially when the linked data is huge).
But it is not easy nor natural to properly handle it, since it introduces some dependencies from the DB layer into the business model. For instance, it does introduce some additional requirements, like constraints / integrity checking and tables/classes inter-dependency.
Furthermore, in real life, we do not have such a separated storage, but we store all details within the main data. So for a Domain-Driven Design, which tries to map the real objects of its own domain, such a pivot table is breaking the business logic. With today's computer power, we can safely implement a centralized way of storing data into our data repository.
A shared nothing architecture (SN) is a distributed computing architecture in which each node is independent and self-sufficient, and there is no single point of contention across the system. People typically contrast SN with systems that keep a large amount of centrally-stored state information, whether in a database, an application server, or any other similar single point of contention.
As we stated in TSQLRecord fields definition, in our ORM, high-level types like dynamic arrays or TPersistent / TCollection properties are stored as BLOB or TEXT inside the main data row. There is no external linked table, no Master/Detail to maintain. In fact, each TSQLRecord instance content could be made self-contained in our ORM.
In particular, you may consider using our TDocVariant custom variant type stored in a variant published property. It allows storing any complex document, made of nested objects or arrays. They will be efficiently stored and transmitted as JSON.
When the server starts to have an increasing number of clients, such a data layout could be a major benefit. In fact, the so-called sharding, or horizontal partitioning of data, is a proven solution for web-scale databases, such as those in use by social networking sites. How does EBay or Facebook scale with so many users? Just by sharding.
A simple but very efficient sharding mechanism could therefore be implemented with our ORM. In-memory databases, or SQLite3, are good candidates for lightning-fast data processing. Even SQLite3 could scale very well in most cases, when properly used - see below.
Storing detailed data as BLOB or as TEXT JSON could at first sound like a wrong idea. It does break one widely accepted principle of the RDBMS architecture. But even Google had to break this dogma. And when MySQL or any similar widely used database tries to implement sharding, it needs a lot of effort. Others, like the NoSQL MongoDB, are better candidates: they are not tied to the SQL/RDBMS flat scheme.
Finally, this implementation pattern fits much better with a Domain-Driven design. See below.
Therefore, on second thought, having a shared nothing architecture at hand could be a great advantage. Our ORM is already ready to break the table-oriented model of SQL. Let us go one step further.
5.5.2.1.2. Nesting objects and arrays
The "has many" and "has many through" relationship we just described does follow the classic process of rows association in a relational database, using a pivot table. This does make sense if you have some DB background, but it is sometimes not worth it.
One drawback of this approach is that the data is split into several tables, and you should carefully take care of data integrity to ensure for instance that when you delete a record, all references to it are also deleted in the associated tables. Our ORM engine will take care of it, but could fail sometimes, especially if you play directly with the tables via SQL, instead of using high-level methods like FillMany* or DestGetJoined.
Another potential issue is that one business logical unit is split into several tables, therefore into several diverse TSQLRecord and TSQLRecordMany classes. From the ORM point of view, this could be confusing.
Starting with the revision 1.13 of the framework, dynamic arrays, TStrings and TCollection can be used as published properties in the TSQLRecord class definition. This won't be strong enough to implement all possible "Has many" architectures, but could be used in most cases, when you need to add a list of records within a particular record, and when this list won't have to be referenced as a stand-alone table.
Dynamic arrays will be stored as BLOB fields in the database, retrieved with Base64 encoding in the JSON transmitted stream, then serialized using the TDynArray wrapper. Therefore, only Delphi clients will be able to use this field content: you'll lose the AJAX capability of the ORM, to the benefit of better integration with object pascal code. Some dedicated SQL functions have been added to the SQLite engine, like IntegerDynArrayContains, to search inside this BLOB field content from the WHERE clause of any search (see below). Those functions are available from AJAX queries.
TPersistent / TStrings and TCollection / TObjectList will be stored as TEXT fields in the database, following the ObjectToJSON function format: you can even serialize any TObject class, via a previous call to the TJSONSerializer.RegisterCustomSerializer class method - see below - or any TObjectList of instances, if they are previously registered by TJSONSerializer.RegisterClassForJSON - see below. This format contains only valid JSON arrays or objects: so it could be un-serialized via an AJAX application, for instance.
About this (trolling?) subject, and why/when you should use plain Delphi objects or arrays instead of classic Master/Detail DB relationship, please read "Objects, not tables" and "ORM is not DB" paragraphs below.
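Such schema-less storage needs only a couple of published properties - here is a minimal sketch, matching the Name and Data fields described just below:
type
  TSQLRecordData = class(TSQLRecord)
  protected
    fName: RawUTF8;
    fData: variant;
  published
    property Name: RawUTF8 read fName write fName stored AS_UNIQUE; // unique indexed key
    property Data: variant read fData write fData;                  // stored as JSON TEXT
  end;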
Here, we defined two indexed keys, ready to access any data record:
Via the ID: TID property defined at TSQLRecord level, which will map the SQLite3 RowID primary key;
Via the Name: RawUTF8 property, which was marked to be indexed by setting the "stored AS_UNIQUE" attribute.
Then, any kind of data may be stored in the Data: variant published property. In the database, it will be stored as JSON UTF-8 text, ready to be retrieved from any client, including AJAX / HTML5 applications. Delphi clients or servers will access those data via late-binding, from its TDocVariant instance.
You have just reproduced the schema-less approach of the NoSQL database engines, in a few lines of code! Thanks to mORMot's design - see below - your applications are able to store any kind of document, and easily access them via HTTP.
The documents stored in such a database can have varying sets of fields, with different types for each field. One could have the following objects in a single collection of our Data: variant rows:
{ name : "Joe", x : 3.3, y : [1,2,3] }{ name : "Kate", x : "abc" }{ q : 456 }
Of course, when using the database for real problems, the data does have a fairly consistent structure. Something like the following will be more common, e.g. for a table persisting student objects:
{ name : "Joe", age : 30, interests : "football" }{ name : "Kate", age : 25 }
Generally, there is a direct analogy between this schema-less style and dynamically typed languages. Constructs such as those above are easy to represent in PHP, Python and Ruby. And, thanks to our TDocVariant late-binding magic, even our good Delphi is able to handle those structures in our code. What we are trying to do here is make this mapping to the database natural, like:
var aRec: TSQLRecordData;
    aID: TID;
begin
  // initialization of one record
  aRec := TSQLRecordData.Create;
  aRec.Name := 'Joe';                            // one unique key
  aRec.data := _JSONFast('{name:"Joe",age:30}'); // create a TDocVariant
  // or we can use this overloaded constructor for simple fields
  aRec := TSQLRecordData.Create(['Joe',_ObjFast(['name','Joe','age',30])]);
  // now we can play with the data, e.g. via late-binding:
  writeln(aRec.Name);     // will write 'Joe'
  writeln(aRec.Data);     // will write '{"name":"Joe","age":30}' (auto-converted to JSON string)
  aRec.Data.age := aRec.Data.age+1;  // one year older
  aRec.Data.interests := 'football'; // add a property to the schema
  aID := aClient.Add(aRec,true);     // will store {"name":"Joe","age":31,"interests":"football"}
  aRec.Free;
  // now we can retrieve the data either via the aID created integer, or via Name='Joe'
end;
One of the great benefits of these dynamic objects is that schema migrations become very easy. With a traditional RDBMS, releases of code might contain data migration scripts. Further, each release should have a reverse migration script in case a rollback is necessary. ALTER TABLE operations can be very slow and result in scheduled downtime.
With a schema-less organization of the data, 90% of the time adjustments to the database become transparent and automatic. For example, if we wish to add GPA to the student objects, we add the attribute, re-save, and all is well - if we look up an existing student and reference GPA, we just get back null. Further, if we roll back our code, the new GPA fields in the existing objects are unlikely to cause problems if our code was well written.
In fact, SQLite3 is so efficient with its B-TREE index storage, that such a structure may be used as a credible alternative to much heavier NoSQL engines, like MongoDB or CouchDB. With the possibility to add some "regular" fields, e.g. plain numbers (like ahead-computed aggregation values), or text (like a summary or description field), you can still use any needed fast SQL query, without the complexity of the map/reduce algorithms used by the NoSQL paradigm. You could even use the Full Text Search - FTS3/FTS4/FTS5, see below - or RTREE extension advanced features of SQLite3 to perform your queries. Then, thanks to mORMot's ability to access any external database engine, you are able to perform a JOINed query of your schema-less data with some data stored e.g. in an Oracle, PostgreSQL or MS SQL enterprise database. Or switch later to a true MongoDB storage, in just one line of code - see below.
5.5.2.1.2.1.2. JSON operations from SQL code
As we stated, any variant field will be serialized as JSON, then stored as plain TEXT in the database. In order to make a complex query on the stored JSON, you could retrieve it in your end-user code, then use the corresponding TDocVariant instance to perform the search on its content. Of course, all this has a noticeable performance cost, especially when the data tend to grow.
The natural way of solving those performance issues is to add some "regular" RDBMS fields, with a proper index, then perform the requests on those fields. But sometimes, you may need to run some additional queries, perhaps in conjunction with "regular" field lookup, on the stored JSON data itself. In order to avoid the slow conversion on the ORM client side, we defined some SQL functions dedicated to JSON processing.
The first is JsonGet(), and is able to extract any value from the TEXT field, mapping a variant:
JsonGet(ArrColumn,0) - returns a property value by index, from a JSON array
JsonGet(ObjColumn,'PropName') - returns a property value by name, from a JSON object
JsonGet(ObjColumn,'Obj1.Obj2.Prop') - returns a property value by path, including nested JSON objects
JsonGet(ObjColumn,'Prop1,Prop2') - extracts properties by name, from a JSON object
JsonGet(ObjColumn,'Prop1,Obj1.Prop') - extracts properties by name (including nested JSON objects), from a JSON object
JsonGet(ObjColumn,'Prop*') - extracts properties by wildchar name, from a JSON object
JsonGet(ObjColumn,'Prop*,Obj1.P*') - extracts properties by wildchar name (including nested JSON objects), from a JSON object
If no value matches, this function will return SQL NULL. If the matching value is a simple JSON text or number, it will be returned as a TEXT, INTEGER or DOUBLE value, ready to be used as a result column or in any WHERE clause. If the returned value is a nested JSON object or array, it will be returned as TEXT, serialized as JSON; as a consequence, you may use it as the source of another JsonGet() function, or even gather the results via the CONCAT() aggregate function.
The comma-separated syntax allowed in the property name parameter (e.g. 'Prop1,Prop2,Prop3'), will search for several properties at once in a single object, returning a JSON object of all matching values - e.g. '{"Prop2":"Value2","Prop3":123}' if the Prop1 property did not appear in the stored JSON object.
If you end the property name with a * character, it will return a JSON object, with all matching properties. Any nested object will have its property names be flattened as {"Obj1.Prop":...}, within the returned JSON object. Note that the comma-separated syntax also allows such wildchar search, so that e.g.
JsonGet(ObjColumn,'owner') = {"login":"smith","id":123456} as TEXT
JsonGet(ObjColumn,'owner.login') = "smith" as TEXT
JsonGet(ObjColumn,'owner.id') = 123456 as INTEGER
JsonGet(ObjColumn,'owner.name') = NULL
JsonGet(ObjColumn,'owner.login,owner.id') = {"owner.login":"smith","owner.id":123456} as TEXT
JsonGet(ObjColumn,'owner.I*') = {"owner.id":123456} as TEXT
JsonGet(ObjColumn,'owner.*') = {"owner.login":"smith","owner.id":123456} as TEXT
JsonGet(ObjColumn,'unknown.*') = NULL
Another function, named JsonHas(), is similar to JsonGet(), but will return TRUE or FALSE depending on whether the supplied property (specified by name or index) exists. It may be faster to use JsonHas() than JsonGet() e.g. in a WHERE clause, when you do not want to process the property value, but only return the data rows containing the needed information.
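For instance, assuming a Data TEXT column storing such JSON objects (the table and column names below are only illustrative), those functions could be used directly from SQL:
SELECT JsonGet(Data,'owner.login') FROM MyTable
  WHERE JsonGet(Data,'owner.id')=123456;
SELECT * FROM MyTable
  WHERE JsonHas(Data,'owner.login');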
Since the process takes place within the SQLite3 engine itself, and since those functions use a fast SAX-like approach (without any temporary memory allocation during the search), those JSON functions can be pretty efficient, and compare favorably with some dedicated NoSQL engines.
5.5.2.1.2.2. Dynamic arrays fields
5.5.2.1.2.2.1. Dynamic arrays from Delphi Code
For instance, here is how the regression tests included in the framework define a TSQLRecord class with some additional dynamic arrays fields:
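A sketch of such a declaration, consistent with the fields used in the code below (the exact regression-test source may differ slightly), could be:
type
  TFV = packed record
    Major, Minor, Release, Build: integer;
    Main, Detailed: string;
  end;
  TFVs = array of TFV;

  TSQLRecordPeopleArray = class(TSQLRecordPeople)
  protected
    fInts: TIntegerDynArray;
    fCurrency: TCurrencyDynArray;
    fFileVersion: TFVs;
    fUTF8: RawUTF8;
  published
    property UTF8: RawUTF8 read fUTF8 write fUTF8;
    property Ints: TIntegerDynArray index 1 read fInts write fInts;
    property Currency: TCurrencyDynArray index 2 read fCurrency write fCurrency;
    property FileVersion: TFVs index 3 read fFileVersion write fFileVersion;
  end;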
This TSQLRecordPeopleArray class inherits from TSQLRecordPeople, so it will add some new UTF8, Ints, Currency and FileVersion fields to this root class fields (FirstName, LastName, Data, YearOfBirth, YearOfDeath).
Some content is added to the PeopleArray table, with the following code:
var V: TSQLRecordPeople;
VA: TSQLRecordPeopleArray;
FV: TFV;
(...)
V2.FillPrepare(Client,'LastName=:(''Dali''):');
n := 0;
while V2.FillOne do
begin
VA.FillFrom(V2); // fast copy some content from TSQLRecordPeople
The FillPrepare / FillOne methods are used to loop through all People table rows with a LastName column value equal to 'Dali' (with a prepared statement thanks to :( ):), then initialize a TSQLRecordPeopleArray instance with those values, using a FillFrom method call.
inc(n);
  if n and 31=0 then
  begin
VA.UTF8 := '';
VA.DynArray('Ints').Add(n);
Curr := n*0.01;
VA.DynArray(2).Add(Curr);
FV.Major := n;
FV.Minor := n+2000;
FV.Release := n+3000;
FV.Build := n+4000;
str(n,FV.Main);
str(n+1000,FV.Detailed);
    VA.DynArray('FileVersion').Add(FV);
  end
  else
str(n,VA.fUTF8);
The n variable is used to follow the PeopleArray row number: most of the time, its converted textual value is stored in the UTF8 column, and once per 32 rows, one item is added to the Ints, Currency and FileVersion dynamic array fields.
We could have used normal access to the VA and FV dynamic arrays, as such:
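That is, something along these lines - a sketch accessing the protected fInts field directly, which is possible from within the same unit:
    SetLength(VA.fInts,length(VA.fInts)+1);
    VA.fInts[high(VA.fInts)] := n;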
But the DynArray method is used instead, to allow direct access to the dynamic array via a TDynArray wrapper: those two lines therefore behave the same as the VA.DynArray('Ints').Add(n) call used in the loop above.
Note that the DynArray method can be used via two overloaded sets of parameters: either the field name ('Ints'), or an index value, as defined in the class declaration. So we could have written:
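For instance, using the index values of the property declarations (a sketch, assuming Ints was declared with index 1 and Currency with index 2, as in the class sketch above):
    VA.DynArray(1).Add(n);     // same as VA.DynArray('Ints').Add(n)
    Curr := n*0.01;
    VA.DynArray(2).Add(Curr);  // same as VA.DynArray('Currency').Add(Curr)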
Of course, using the DynArray method is a bit slower than direct SetLength / Ints[] use. Using DynArray with an index should also be a bit faster than using DynArray with a textual field name (like 'Ints'), with the benefit of perhaps fewer keyboard errors when typing the property name. But if you need to add a lot of items to a dynamic array quickly, you could use a custom TDynArray wrapper with an associated external Count value, or direct access to its content (like SetLength + Ints[]).
Then the FillPrepare / FillOne loop ends with the following line:
  Check(Client.Add(VA,true)=n);
end;
This will add the VA fields content into the database, creating a new row in the PeopleArray table, with an ID following the value of the n variable. All dynamic array fields will be serialized as BLOB into the database table.
5.5.2.1.2.2.2. Dynamic arrays from SQL code
In order to access the BLOB content of the dynamic arrays directly from SQL statements, some new SQL functions have been defined in TSQLDataBase, named after their native simple types:
ByteDynArrayContains(BlobField,I64);
WordDynArrayContains(BlobField,I64);
IntegerDynArrayContains(BlobField,I64);
CardinalDynArrayContains(BlobField,I64);
CurrencyDynArrayContains(BlobField,I64) - in this case, I64 is not the currency value directly converted into an Int64 value (i.e. not Int64(aCurrency)), but the binary mapping of the currency value, i.e. aCurrency*10000 or PInt64(@aCurrency)^;
Int64DynArrayContains(BlobField,I64);
RawUTF8DynArrayContainsCase(BlobField,'Text');
RawUTF8DynArrayContainsNoCase(BlobField,'Text').
Those functions allow direct access to the BLOB content like this:
for i := 1 to n shr 5 do
begin
  k := i shl 5;
  aClient.OneFieldValues(TSQLRecordPeopleArray,'ID',
    FormatUTF8('IntegerDynArrayContains(Ints,?)',[],[k]),IDs);
  Check(length(IDs)=n+1-32*i);
  for j := 0 to high(IDs) do
    Check(IDs[j]=k+j);
end;
In the above code, the WHERE clause of the OneFieldValues method will use the dedicated IntegerDynArrayContains SQL function to retrieve all records containing the specified integer value k in their Ints BLOB column. With such a function, all the processing is performed Server-side, with no slow data transmission nor JSON/Base64 serialization.
For instance, using such a SQL function, you are able to store multiple TSQLRecord.ID field values into one TIntegerDynArray property column, and have direct search ability inside the SQL statement. This could be a very handy way of implementing a "one to many" or "many to many" relationship, without the need for a pivot table.
Those functions were implemented to be very efficient for speed. They won't create any temporary dynamic array during the search, but will directly access the raw BLOB memory content, as returned by the SQLite engine. The RawUTF8DynArrayContainsCase / RawUTF8DynArrayContainsNoCase functions will also search directly inside the BLOB. With a huge number of requests, this could be slower than using a TSQLRecordMany pivot table, since the search won't use any index, and will have to read the whole BLOB field during the request. But, in practice, those functions behave nicely with a relatively small amount of data (up to about 50,000 rows). Don't forget that BLOB column access is very optimized in SQLite3.
For more complex dynamic array content handling, you'll have to either create your own SQL function, using the TSQLDataBase.RegisterSQLFunction method and an associated TSQLDataBaseSQLFunction class, or use a dedicated Service or a stored procedure - see below on how to implement it.
5.5.2.1.2.3. TPersistent/TCollection fields
For instance, here is how the regression tests included in the framework define a TSQLRecord class with some additional TPersistent, TCollection or TRawUTF8List fields (TRawUTF8List is just a TStringList-like component, dedicated to handling the RawUTF8 kind of string) - see the sketch after the next paragraph.
In order to avoid any memory leak or access violation, it is mandatory to initialize then release all internal property instances in the overridden constructor and destructor of the class:
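Here is a sketch of both the class declaration and the mandatory constructor / destructor overrides - the nested TMyItem / TMyItems / TMyPersistent type names are purely illustrative, the actual regression-test classes being declared in the framework source:
type
  TMyItem = class(TCollectionItem)
  private
    fColor: integer;
    fLength: integer;
    fName: RawUTF8;
  published
    property Color: integer read fColor write fColor;
    property Length: integer read fLength write fLength;
    property Name: RawUTF8 read fName write fName;
  end;

  TMyItems = class(TCollection)
  protected
    function GetItem(Index: integer): TMyItem;
  public
    function Add: TMyItem;
    property Items[Index: integer]: TMyItem read GetItem; default;
  end;

  TMyPersistent = class(TPersistent)
  private
    fOne: TMyItem;
    fColl: TMyItems;
  public
    constructor Create;
    destructor Destroy; override;
  published
    property One: TMyItem read fOne;
    property Coll: TMyItems read fColl;
  end;

  TSQLRecordPeopleObject = class(TSQLRecordPeople)
  private
    fPersistent: TMyPersistent;
    fUTF8: TRawUTF8List;
  public
    constructor Create; override;
    destructor Destroy; override;
  published
    property UTF8: TRawUTF8List read fUTF8;
    property Persistent: TMyPersistent read fPersistent;
  end;

function TMyItems.GetItem(Index: integer): TMyItem;
begin
  result := TMyItem(inherited Items[Index]);
end;

function TMyItems.Add: TMyItem;
begin
  result := TMyItem(inherited Add); // typed access to the new collection item
end;

constructor TMyPersistent.Create;
begin
  inherited Create;
  fOne := TMyItem.Create(nil);       // stand-alone item, not owned by a collection
  fColl := TMyItems.Create(TMyItem); // collection of TMyItem instances
end;

destructor TMyPersistent.Destroy;
begin
  fOne.Free;
  fColl.Free;
  inherited Destroy;
end;

constructor TSQLRecordPeopleObject.Create;
begin
  inherited Create;
  fPersistent := TMyPersistent.Create;
  fUTF8 := TRawUTF8List.Create;
end;

destructor TSQLRecordPeopleObject.Destroy;
begin
  fPersistent.Free;
  fUTF8.Free;
  inherited Destroy;
end;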
var VO: TSQLRecordPeopleObject;
(...)
if Client.TransactionBegin(TSQLRecordPeopleObject) then
try
  V2.FillPrepare(Client,'LastName=?',['Morse']);
  n := 0;
  while V2.FillOne do
  begin
    VO.FillFrom(V2); // fast copy some content from TSQLRecordPeople
    inc(n);
    VO.Persistent.One.Color := n+100;
    VO.Persistent.One.Length := n;
    VO.Persistent.One.Name := Int32ToUtf8(n);
    if n and 31=0 then
    begin
      VO.UTF8.Add(VO.Persistent.One.Name);
      with VO.Persistent.Coll.Add do
      begin
        Color := n+1000;
        Length := n*2;
        Name := Int32ToUtf8(n*3);
      end;
    end;
    Check(Client.Add(VO,true)=n);
  end;
  Client.Commit;
except
  Client.RollBack; // in case of error
end;
This will add 1000 rows to the PeopleObject table.
First of all, the insertions are nested inside a transaction, to speed up SQL INSERT statements, via the TransactionBegin and Commit methods. Please note that the TransactionBegin method returns a boolean value, and should be checked in a multi-threaded or Client-Server environment (in this part of the test suite, content is accessed in the same thread, so checking the result is not mandatory, but shown here for accuracy). In the current implementation of the framework, transactions should not be nested. The typical transaction usage should be the following:
if Client.TransactionBegin(TSQLRecordPeopleObject) then
try
  // .... modify the database content, raise exceptions on error
  Client.Commit;
except
  Client.RollBack; // in case of error
end;
In a concurrent environment, you may prefer the TransactionBeginRetry variant, so that the transaction start is retried if another transaction is already in progress:
if Client.TransactionBeginRetry(TSQLRecordPeopleObject,20) then
  ...
Note that the transactions are handled according to the corresponding client session: the client should make the transaction block as short as possible (e.g. using a batch command), since any write attempt by other clients will wait for the transaction to be released (with either a commit or rollback).
The fields inherited from the TSQLRecord class are retrieved via FillPrepare / FillOne method calls, for rows with the LastName column matching 'Morse'. The values of one TPersistent property instance are set (VO.Persistent.One), then, every 32 rows, a new item is added to the VO.Persistent.Coll collection.
Here is the data sent for instance to the Server, when the item with ID=32 is added:
Up to revision 1.15 of the framework, the transmitted JSON content was not a true JSON object, but sent as RawUTF8 TEXT values (i.e. every double-quote (") character is escaped as \" - e.g. "UTF8":"[\"32\"]"). Starting with revision 1.16 of the framework, the transmitted data is a true JSON object, to allow better integration with an AJAX client. That is, the UTF8 field is transmitted as a valid JSON array of strings, and Persistent as a valid JSON object with nested objects and arrays.
When all 1000 rows were added to the database file, the following loop is called once with direct connection to the DB engine, once with a remote client connection (with all available connection protocols):
for i := 1 to n do begin
VO.ClearProperties;
Client.Retrieve(i,VO);
Check(VO.ID=i);
Check(VO.LastName='Morse');
Check(VO.UTF8.Count=i shr 5);
for j := 0 to VO.UTF8.Count-1 do
Check(GetInteger(pointer(VO.UTF8[j]))=(j+1) shl 5);
Check(VO.Persistent.One.Length=i);
Check(VO.Persistent.One.Color=i+100);
Check(GetInteger(pointer(VO.Persistent.One.Name))=i);
Check(VO.Persistent.Coll.Count=i shr 5);
for j := 0 to VO.Persistent.Coll.Count-1 do
with VO.Persistent.Coll[j] do begin
k := (j+1) shl 5;
Check(Color=k+1000);
Check(Length=k*2);
Check(GetInteger(pointer(Name))=k*3);
end;
end;
All the magic is made in the Client.Retrieve(i,VO) method. Data is retrieved from the database as TEXT values, then un-serialized from JSON arrays or objects into the internal TRawUTF8List and TPersistent instances.
When the ID=33 row is retrieved, the following JSON content is received from the server:
In contrast to the POST content, this does not define valid nested JSON objects or arrays: the UTF8 and Persistent fields are transmitted as JSON strings. This is a known limitation of the framework, due to the fact that it is much faster to return the text directly from the database than to process it for this operation. For an AJAX application, it won't be difficult to use a temporary string property, evaluate the JSON content from it, then replace the property with the corresponding object content. Implementation may change in the future.
5.5.2.1.2.4. Any TObject, including TObjectList
TPersistent, TCollection and TSQLRecord are not the only types which can be serialized by writing all published properties. The ORM core of mORMot uses the ObjectToJSON() and JSONToObject() (aka TJSONSerializer.WriteObject) functions to process proper JSON serialization.
You have two methods to register JSON serialization for any kind of class:
Custom serialization via read and write callbacks - see TJSONSerializer.RegisterCustomSerializer below;
TObjectList instances, after a proper call to TJSONSerializer.RegisterClassForJSON - see below.
In the database, such kind of objects will be stored as TEXT (serialized as JSON), and transmitted as regular JSON objects or arrays when working in Client-Server mode.
In fact, mORMot's integration with MongoDB has been optimized so that any of those high-level properties (like dynamic arrays, variants and TDocVariant, or any class) will be stored as BSON documents on the MongoDB server. If those types are able to be serialized as JSON - which is the case for simple types, variants and for any dynamic array / record custom types - see below, then the mORMotDB.pas unit will store this data as BSON objects or arrays on the server side, and not as BLOB or JSON text (as with SQL back-ends). You will be able to query by name any nested sub-document or sub-array, in the MongoDB collection.
As such, data sharing with mORMot will benefit from a RDBMS back-end, as a reliable and proven solution, but also from the latest NoSQL technology.
5.5.2.2. ORM implementation via pivot table
Data sharding just feels natural, from the ORM point of view.
But defining a pivot table is a classic and powerful use of relational databases, and will unleash their power:
When data is huge, you can query only for the needed data, without having to load the whole content (it is something similar to lazy loading in ORM terminology);
In a master/detail data model, sometimes it can be handy to access the detail records directly, e.g. for data consolidation;
And, last but not least, the pivot table is the natural way of storing data associated with "has many through" relationship (e.g. association time or corresponding permissions).
5.5.2.2.1. Introducing TSQLRecordMany
A dedicated class, inheriting from the standard TSQLRecord class (which is the base of all objects stored in our ORM), has been created, named TSQLRecordMany. This table will turn the "many to many" relationship into two "one to many" relationships pointing in opposite directions. It shall contain at least two TSQLRecord (i.e. INTEGER) published properties, named "Source" and "Dest" (those names are mandatory, because the ORM will search for these exact property names): the first pointing to the source record (the one with a TSQLRecordMany published property) and the second to the destination record.
When a TSQLRecordMany published property exists in a TSQLRecord, it is initialized automatically during TSQLRecord.Create constructor execution into a real class instance. Note that the default behavior for a TSQLRecord published property is to contain an INTEGER value which is the ID of the corresponding record - creating a "one to one" or "many to one" relationship. But TSQLRecordMany is a special case. So don't be confused! :)
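The class declarations used by the regression tests are not reproduced in this extract; a minimal sketch, consistent with the Source / Dest / AssociationTime and DestList names used below (the actual declarations in SynSelfTests.pas may differ slightly), could be:
type
  TSQLSource = class;
  TSQLDest = class(TSQLRecord)
  protected
    fSignatureTime: TTimeLog;
    fSignature: RawUTF8;
  published
    property SignatureTime: TTimeLog read fSignatureTime write fSignatureTime;
    property Signature: RawUTF8 read fSignature write fSignature;
  end;
  TSQLDestPivot = class(TSQLRecordMany)
  protected
    fSource: TSQLSource;        // mandatory "Source" property
    fDest: TSQLDest;            // mandatory "Dest" property
    fAssociationTime: TTimeLog; // additional "has many through" field
  published
    property Source: TSQLSource read fSource;
    property Dest: TSQLDest read fDest;
    property AssociationTime: TTimeLog read fAssociationTime write fAssociationTime;
  end;
  TSQLSource = class(TSQLRecord)
  protected
    fSignatureTime: TTimeLog;
    fSignature: RawUTF8;
    fDestList: TSQLDestPivot;
  published
    property SignatureTime: TTimeLog read fSignatureTime write fSignatureTime;
    property Signature: RawUTF8 read fSignature write fSignature;
    property DestList: TSQLDestPivot read fDestList;
  end;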
This TSQLRecordMany instance is indeed available to access the pivot table records directly, via FillMany then FillRow, FillOne and FillRewind methods to loop through records, or FillManyFromDest / DestGetJoined for more advanced usage.
Here is how the regression tests are written in the SynSelfTests unit:
procedure TestMany(aClient: TSQLRestClient);
var MS: TSQLSource;
MD, MD2: TSQLDest;
i: integer;
sID, dID: array[1..100] of Integer;
res: TIntegerDynArray;
begin
MS := TSQLSource.Create;
MD := TSQLDest.Create;
try
MD.fSignatureTime := TimeLogNow;
MS.fSignatureTime := MD.fSignatureTime;
Check(MS.DestList<>nil);
Check(MS.DestList.InheritsFrom(TSQLRecordMany));
aClient.TransactionBegin(TSQLSource); // faster process
This code will create two TSQLSource / TSQLDest instances, then will begin a transaction (for a faster database engine process, since there will be multiple records added at once). Note that during TSQLSource.Create execution, the presence of a TSQLRecordMany field is detected, and the DestList property is filled with an instance of TSQLDestPivot. This DestList property can therefore be used directly via the "has-many" dedicated methods, like ManyAdd.
for i := 1 to high(dID) do begin
MD.fSignature := FormatUTF8('% %',[aClient.ClassName,i]);
dID[i] := aClient.Add(MD,true);
Check(dID[i]>0);
end;
This will just add some rows to the Dest table.
for i := 1 to high(sID) do begin
MS.fSignature := FormatUTF8('% %',[aClient.ClassName,i]);
sID[i] := aClient.Add(MS,True);
Check(sID[i]>0);
MS.DestList.AssociationTime := i;
Check(MS.DestList.ManyAdd(aClient,sID[i],dID[i])); // associate both lists
Check(not MS.DestList.ManyAdd(aClient,sID[i],dID[i],true)); // no dup
end;
aClient.Commit;
This will create some Source rows, and will call the ManyAdd method of the auto-created DestList instance to associate a Dest item to the Source item. The AssociationTime field of the DestList instance is set, to implement a "has many through" relationship.
Then the transaction is committed to the database.
for i := 1 to high(dID) do begin
Check(MS.DestList.SourceGet(aClient,dID[i],res));
if not CheckFailed(length(res)=1) then
Check(res[0]=sID[i]);
Check(MS.DestList.ManySelect(aClient,sID[i],dID[i]));
Check(MS.DestList.AssociationTime=i);
end;
This code will validate the association of Source and Dest tables, using the dedicated SourceGet method to retrieve all Source items IDs associated to the specified Dest ID, i.e. one item, matching the sID[] values. It will also check for the AssociationTime as set for the "has many through" relationship.
for i := 1 to high(sID) do begin
Check(MS.DestList.DestGet(aClient,sID[i],res));
if CheckFailed(length(res)=1) then
continue; // avoid GPF
Check(res[0]=dID[i]);
The DestGet method retrieves all Dest items IDs associated to the specified Source ID, i.e. one item, matching the dID[] values.
Check(MS.DestList.FillMany(aClient,sID[i])=1);
This will fill-prepare the DestList instance with all the pivot table rows matching the specified Source ID. It should return only one item.
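The checking lines themselves are not reproduced in this extract; a minimal sketch - checking only the AssociationTime field, for brevity - could read:
Check(MS.DestList.FillOne);           // fill the first (and unique) prepared row
Check(MS.DestList.AssociationTime=i); // its content matches the expected value
Check(not MS.DestList.FillOne);       // no other row is expected for this Source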
Those lines will fill the first (and unique) prepared item, and check that Source, Dest and AssociationTime properties match the expected values. Then the next call to FillOne should fail, since only one prepared row is expected for this Source ID.
Check(MS.DestList.DestGetJoined(aClient,'',sID[i],res));
if not CheckFailed(length(res)=1) then
Check(res[0]=dID[i]);
This will retrieve all Dest items IDs associated to the specified Source ID, with no additional WHERE condition.
This will retrieve all Dest items IDs associated to the specified Source ID, with an additional always invalid WHERE condition. It should always return no item in the res array, since SignatureTime is never equal to 0.
Check(MS.DestList.DestGetJoined(aClient,
  FormatUTF8('Dest.SignatureTime=?',[],[MD.SignatureTime]),sID[i],res));
if CheckFailed(length(res)=1) then
continue; // avoid GPF
Check(res[0]=dID[i]);
This will retrieve all Dest items IDs associated to the specified Source ID, with an additional WHERE condition, matching the expected value. It should therefore return one item.
Note the call of the global FormatUTF8() function to get the WHERE clause. You may have written instead:
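(the alternative line is not reproduced here; it would have concatenated the inlined value by hand, along these lines)
Check(MS.DestList.DestGetJoined(aClient,
  'Dest.SignatureTime=:('+Int64ToUtf8(MD.SignatureTime)+'):',sID[i],res));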
But in this case, using manual inlined :(..): values is less convenient than the '?' calling convention, especially for string (RawUTF8) values.
MD2 := MS.DestList.DestGetJoined(aClient,
  FormatUTF8('Dest.SignatureTime=?',[],[MD.SignatureTime]),sID[i]) as TSQLDest;
if CheckFailed(MD2<>nil) then
continue;
try
  Check(MD2.FillOne);
Check(MD2.ID=dID[i]);
Check(MD2.Signature=FormatUTF8('% %',[aClient.ClassName,i]));
finally
MD2.Free;
end;
end;
This overloaded DestGetJoined method will return into MD2 a TSQLDest instance, prepared with all the Dest record content associated to the specified Source ID, with an additional WHERE condition matching the expected value. Then FillOne will retrieve the first (and unique) matching Dest record, and check its values.
aClient.TransactionBegin(TSQLDestPivot); // faster process
for i := 1 to high(sID) shr 2 do
  Check(MS.DestList.ManyDelete(aClient,sID[i*4],dID[i*4]));
aClient.Commit;
for i := 1 to high(sID) do
  if i and 3<>0 then begin
    Check(MS.DestList.ManySelect(aClient,sID[i],dID[i]));
    Check(MS.DestList.AssociationTime=i);
  end else
Check(not MS.DestList.ManySelect(aClient,sID[i],dID[i]));
This code will delete one association out of four, and ensure that ManySelect will retrieve only the expected associations.
finally
MD.Free;
MS.Free;
end;
This will release associated memory, and also the instance of TSQLDestPivot created in the DestList property.
5.5.2.2.2. Automatic JOIN query
All those methods (ManySelect, DestGetJoined...) are used to retrieve the relations between tables from the pivot table point of view. This saves bandwidth, and can be used in most simple cases, but it is not the only way to perform requests on many-to-many relationships. And you may have several TSQLRecordMany instances in the same main record - in this case, those methods won't help you.
It is very common, in the SQL world, to create a JOINed request at the main "Source" table level, and combine records from two or more tables in a database. It creates a set that can be saved as a table or used as is. A JOIN is a means for combining fields from two or more tables by using values common to each. Writing such JOINed statements is not so easy by hand, especially because you'll have to work with several tables, and have to specify the exact fields to be retrieved; if you have several pivot tables, it may start to be a nightmare. Let's see how our ORM will handle it.
A dedicated FillPrepareMany method has been added to the TSQLRecord class, in conjunction with a new constructor named CreateAndFillPrepareMany. This particular method will:
Instantiate all Dest properties of each TSQLRecordMany instance - so that the JOINed request will be able to populate those values directly;
Create the appropriate SELECT statement, with an optional WHERE clause.
Here is the test included in our regression suite, working with the same database:
Check(MS.FillPrepareMany(aClient,
'DestList.Dest.SignatureTime<>% and id>=? and DestList.AssociationTime<>0 '+
'and SignatureTime=DestList.Dest.SignatureTime '+
'and DestList.Dest.Signature<>"DestList.AssociationTime"',[0],[sID[1]]));
Of course, the only useful parameter here is id>=? which is used to retrieve the just added relationships in the pivot table. All other conditions will always be true, but it will help testing the generated SQL.
Our mORMot will generate the following SQL statement:
select A.ID AID,A.SignatureTime A00,A.Signature A01,
B.ID BID,B.AssociationTime B02,
C.ID CID,C.SignatureTime C00,C.Signature C01
from ASource A,ADests B,ADest C
where B.Source=A.ID and B.Dest=C.ID
and (C.SignatureTime<>0 and A.id>=:(1): and B.AssociationTime<>0
and A.SignatureTime=C.SignatureTime and C.Signature<>"DestList.AssociationTime")
You can notice the following:
All declared TSQLRecordMany instances (renamed B in our case) are included in the statement, with all corresponding Dest instances (renamed as C);
Fields are aliased with short unique identifiers (AID, A01, BID, B02...), for all simple properties of every class;
The JOIN clause is created (B.Source=A.ID and B.Dest=C.ID);
Our manual WHERE clause has been translated into proper SQL, including the table internal aliases (A,B,C) - in fact, DestList.Dest has been replaced by C, the main ID property has been declared properly as A.ID, and the "DestList.AssociationTime" text remained untouched, because it was surrounded by quotes.
That is, our ORM did all the dirty work for you! You can use Delphi-level conditions in your query, and the engine will transparently convert them into a valid SQL statement. The benefit of this will become clear in case of multiple pivot tables, which are likely to occur in real-world applications.
After the statement has been prepared, you can use the standard FillOne method to loop through all returned rows of data, and access the JOINed columns within the Delphi object instances:
Check(MS.FillTable.RowCount=length(sID));
for i := 1 to high(sID) do begin
MS.FillOne;
Check(MS.fID=sID[i]);
Check(MS.SignatureTime=MD.fSignatureTime);
Check(MS.DestList.AssociationTime=i);
Check(MS.DestList.Dest.fID=dID[i]);
Check(MS.DestList.Dest.SignatureTime=MD.fSignatureTime);
Check(MS.DestList.Dest.Signature=FormatUTF8('% %',[aClient.ClassName,i]));
end;
MS.FillClose;
Note that in our case, an explicit call to FillClose has been added in order to release all Dest instances created in FillPrepareMany. This call is not mandatory if you call MS.Free directly, but it is required if the same MS instance is about to use some regular many-to-many methods, like MS.DestList.ManySelect() - it will prevent any GPF exception from occurring in code expecting the Dest property not to be an instance, but a pointer(DestID) value.
5.6. ORM Data Model
5.6.1. Creating an ORM Model
The TSQLModel class centralizes all TSQLRecord inherited classes used by an application, both database-related and business-logic related.
In order to follow the MVC pattern, the TSQLModel instance is to be used when you have to deal at table level. For instance, do not try to use low-level TSQLDataBase.GetTableNames or TSQLDataBase.GetFieldNames methods in your code. In fact, the tables declared in the Model may not be available in the SQLite3 database schema, but may have been defined as TSQLRestStorageInMemory instance via the TSQLRestServer.StaticDataCreate method, or being external tables - see below. You could even have a mORMot server running without any SQLite3 engine at all, but pure in-memory tables!
Each TSQLModel instance is in fact associated with a TSQLRest instance. An Owner property gives access to the current running client or server TSQLRest instance associated with this model.
By design, models are used on both Client and Server sides. It is therefore a good practice to use a common unit to define all TSQLRecord types, and have a common function to create the related TSQLModel class.
For instance, here is the corresponding function as defined in the first samples available in the source code repository (unit SampleData.pas):
function CreateSampleModel: TSQLModel;
begin
result := TSQLModel.Create([TSQLSampleRecord]);
end;
For a more complex model including link to User Interface, see below.
5.6.2. Several Models
In practice, the same TSQLRecord can be used in several models: this is typically the case for TSQLAuthUser tables, or if client and server instances are running in the same process. So, for accessing the model properties, you have two structures available:
Low-level table properties, as retrieved from the published properties (RTTI) by the ORM kernel of mORMot, stored in TSQLRecordProperties instances - access them via TSQLModel.TableProps[].Props;
Model-specific properties (e.g. the external database mapping), stored in TSQLModelRecordProperties instances - access them via TSQLModel.TableProps[].
So you may use code like this:
var i: integer;
ModelProps: TSQLModelRecordProperties;
Props: TSQLRecordProperties;
begin
...
for i := 0 to high(Model.TableProps) do begin
ModelProps := Model.TableProps[i];
// now you can access ModelProps.ExternalDB.TableName ...
Props := ModelProps.Props;
// now you can use Props.SQLTableName or Props.Fields[]
end;
end;
5.6.3. Filtering and Validating
According to the n-Tier architecture - see Multi-tier architecture - data filtering and validation should be implemented in the business logic, not in the User Interface.
If you are used to developing RAD database applications with Delphi, you may have to change your habits a bit here. Data filtering and validation should be implemented not in the User Interface, but in pure Delphi code.
In order to make this easy, a dedicated set of classes is available in the SynCommons.pas unit, allowing to define both filtering (transformation) and validation. They will all be children of one of these two classes:
Filtering and Validation classes hierarchy
TSQLRecord field content filtering is handled in the TSQLRecord.Filter virtual method, or via some TSQLFilter classes. They will transform the object fields following some rules, e.g. forcing uppercase/lowercase, or trimming text spaces.
TSQLRecord field content validation is handled in the TSQLRecord.Validate virtual method, or via some TSQLValidate classes. Here the object fields will be checked against a set of rules, and any invalid content will be reported.
Some "standard" classes are already defined in the SynCommons.pas and mORMot.pas units, covering most common usage:
Default filters and Validation classes hierarchy
You have powerful validation classes for IP Address, Email (with TLD+domain name), simple regex pattern, textual validation, strong password validation...
It does make sense to define this behavior within the TSQLRecord definition, so that it will be shared by all models.
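For instance, a sketch of such a shared definition - assuming your framework revision exposes the InternalDefineModel() virtual class procedure and the AddFilterOrValidate() class method (check the mORMot.pas interface for the exact signatures) - could look like this:
class procedure TSQLMyRecord.InternalDefineModel(Props: TSQLRecordProperties);
begin
  // registered once at class level, therefore shared by all models using this class
  AddFilterOrValidate('Name',TSynFilterUpperCase.Create);
  AddFilterOrValidate('Email',TSynValidateEmail.Create);
end;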
If you want to perform some text field length validation or filtering at ORM level, you may use the TSQLRecordProperties methods SetMaxLengthValidatorForTextFields() or SetMaxLengthFilterForTextFields(), or, at model level:
function CreateModel: TSQLModel;
begin
result := TSQLModel.Create([TSQLMyRecord1,TSQLMyRecord2]);
result.SetMaxLengthValidatorForAllTextFields(true); // "index n" is in UTF-8 bytes
end;
In order to perform the filtering (transformation) of some content, you'll have to call the aRecord.Filter() method, and aRecord.Validate() to test for valid content.
For instance, this is how mORMotUIEdit.pas unit filters and validates the user interface input:
procedure TRecordEditForm.BtnSaveClick(Sender: TObject);
(...)
// perform all registered filtering
Rec.Filter(ModifiedFields);
// perform content validation
FieldIndex := -1;
ErrMsg := Rec.Validate(Client,ModifiedFields,@FieldIndex);
if ErrMsg<>'' then begin
  // invalid field content -> show message, focus component and abort saving
  if cardinal(FieldIndex)<cardinal(length(fFieldComponents)) then begin
C := fFieldComponents[FieldIndex];
C.SetFocus;
Application.ProcessMessages;
ShowMessage(ErrMsg,format(sInvalidFieldN,[fFieldCaption[FieldIndex]]),true);
  end else
    ShowMessage(ErrMsg,format(sInvalidFieldN,['?']),true);
end else
  // close window on success
ModalResult := mrOk;
end;
It is up to your code to filter and validate the record content. By default, the mORMot CRUD operations won't call the registered filters or validators.
5.7. ORM Cache
Here is the definition of "cache", as stated by Wikipedia:
In computer engineering, a cache is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the greater the number of requests that can be served from the cache, the faster the overall system performance becomes.
To be cost efficient and to enable an efficient use of data, caches are relatively small. Nevertheless, caches have proven themselves in many areas of computing because access patterns in typical computer applications have locality of reference. References exhibit temporal locality if data is requested again that has been recently requested already. References exhibit spatial locality if data is requested that is physically stored close to data that has been requested already.
In our ORM framework, since performance was one of our goals from the beginning, caching has been implemented at four levels:
Statement cache for implementing SQL prepared statements, and parameters bound on the fly - see Query parameters and below - note that this cache is available not only for the SQlite3 database engine, but also for any external engine - see below;
Global JSON result cache at the database level, which is flushed globally on any INSERT / UPDATE - see below;
Tuned record cache at the CRUD/RESTful level for specified tables or records on the server side - see below;
Tuned record cache at the CRUD/RESTful level for specified tables or records on the client side - see below.
Thanks to those specific caching abilities, our framework is able to minimize the number of client-server requests, therefore sparing bandwidth and network access, and it scales well in a concurrent rich-client access architecture. From this perspective, a Client-Server ORM does make sense, and is of huge benefit in comparison to a basic ORM used only for data persistence and automated SQL generation.
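For instance, a minimal sketch of tuning the last two levels - assuming the TSQLRest.Cache property and its SetCache() / SetTimeOut() methods, as detailed below - could be:
// on the server side: cache the whole TSQLInvoice table content
aServer.Cache.SetCache(TSQLInvoice);
// on the client side: cache one particular record, with a one minute timeout
aClient.Cache.SetCache(TSQLInvoice,123);
aClient.Cache.SetTimeOut(TSQLInvoice,60000); // timeout is specified in milliseconds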
5.8. Calculated fields
It is often useful to handle some calculated fields. That is, having some field values computed when you set another field value. For instance, if you set an error code from an enumeration (stored in an INTEGER field), you may want the corresponding text (to be stored in a TEXT field). Or you may want a total amount to be computed automatically from some detailed records.
This should not be done on the Server side. In fact, the framework expects the JSON transmitted from the client to be sent directly to the database layer, as stated by this code from the mORMotSQLite3 unit:
function TSQLRestServerDB.EngineUpdate(Table: TSQLRecordClass; ID: TID;
  const SentData: RawUTF8): boolean;
begin
  if (self=nil) or (Table=nil) or (ID<=0) then
    result := false
  else begin
    // this SQL statement uses :(inlined params): for all values
    result := ExecuteFmt('UPDATE % SET % WHERE RowID=:(%):;',
      [Table.RecordProps.SQLTableName,GetJSONObjectAsSQL(SentData,true,true),ID]);
    if Assigned(OnUpdateEvent) then
      OnUpdateEvent(self,seUpdate,Table,ID);
  end;
end;
The direct conversion from the received JSON content into the SQL UPDATE statement values is performed very quickly via the GetJSONObjectAsSQL procedure. It won't use any intermediary TSQLRecord, so there will be no server-side field calculation possible.
Record-level calculated fields should be done on the Client side, using some setters.
There are at least three ways of updating field values before sending to the server:
Either by using some dedicated setters method for TSQLRecord properties;
Or by overriding the ComputeFieldsBeforeWrite virtual method of TSQLRecord.
If the computed fields need a more complex implementation (e.g. if some properties of another record should be modified), a dedicated RESTful service should be implemented - see below.
5.8.1. Setter for TSQLRecord
For instance, here we define a new table named INVOICE, with only two fields: a dynamic array containing the invoice details, and a field with the total amount. The dynamic array property will be stored as BLOB into the database, and no additional Master/Detail table will be necessary.
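The class declaration itself is not reproduced in this extract; a minimal sketch, consistent with the setter shown just below (the Ident field of the detail record is illustrative), could be:
type
  TInvoiceRec = packed record
    Ident: RawUTF8;
    Amount: currency;
  end;
  TInvoiceRecs = array of TInvoiceRec;
  TSQLInvoice = class(TSQLRecord)
  protected
    fDetails: TInvoiceRecs;
    fTotal: currency;
    procedure SetDetails(const Value: TInvoiceRecs);
  published
    // the dynamic array is serialized as BLOB in the database
    property Details: TInvoiceRecs index 1 read fDetails write SetDetails;
    // no setter: read-only from the ORM point of view
    property Total: currency read fTotal;
  end;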
Note that the Total property does not have any setter (aka write statement). So it will be read-only, from the ORM point of view. In fact, the following protected method will compute the Total property content from the Details property values, when they will be modified:
procedure TSQLInvoice.SetDetails(const Value: TInvoiceRecs);
var i: integer;
begin
fDetails := Value;
fTotal := 0;
for i := 0 to high(Value) do
fTotal := fTotal+Value[i].Amount;
end;
When the object content is sent to the Server, the Total value of the transmitted JSON content will contain the expected value.
Note that with this implementation, SetDetails must be called explicitly. That is, you should not modify the Details[] array content directly, but either use a temporary array during editing and then assign its value to Invoice.Details, or force the update with a line of code like:
Invoice.Details := Invoice.Details; // force Total calculation
5.8.2. TSQLRecord.ComputeFieldsBeforeWrite
Even if a TSQLRecord instance should not normally have access to the TSQLRest level, according to OOP principles, the following virtual method has been defined:
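procedure ComputeFieldsBeforeWrite(aRest: TSQLRest; aOccasion: TSQLEvent); virtual;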
It will be called automatically on the Client side, just before the TSQLRecord content is sent to the remote server, before an add or an update.
In fact, the TSQLRestClientURI.Add / Update / BatchAdd / BatchUpdate methods will call this method before calling TSQLRecord.GetJSONValues and send the JSON content to the server.
On the Server-side, in case of some business logic involving the ORM, the TSQLRestServer.Add / Update methods will also call ComputeFieldsBeforeWrite.
By default, this method will compute the TModTime / sftModTime and TCreateTime / sftCreateTime properties' values from the current server timestamp, as such:
procedure TSQLRecord.ComputeFieldsBeforeWrite(aRest: TSQLRest; aOccasion: TSQLEvent);
var F: integer;
begin
  if (self<>nil) and (aRest<>nil) then
  with RecordProps do begin
    if HasModTimeFields then
      for F := 0 to high(FieldType) do
        if FieldType[f]=sftModTime then
          SetInt64Prop(Self,Fields[F],aRest.ServerTimestamp);
    if HasCreateTimeField and (aOccasion=seAdd) then
      for F := 0 to high(FieldType) do
        if FieldType[f]=sftCreateTime then
          SetInt64Prop(Self,Fields[F],aRest.ServerTimestamp);
  end;
end;
You may override this method for your own purpose, provided that you call this inherited implementation to properly handle the TModTime and TCreateTime published properties.
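For instance, a minimal sketch of such an override - the fStatusText / fStatusCode fields and the TMyOrderStatus enumeration are hypothetical - could be:
procedure TSQLMyOrder.ComputeFieldsBeforeWrite(aRest: TSQLRest; aOccasion: TSQLEvent);
begin
  inherited ComputeFieldsBeforeWrite(aRest,aOccasion); // handle TModTime/TCreateTime
  // compute a denormalized TEXT field from the enumeration value
  fStatusText := GetEnumName(TypeInfo(TMyOrderStatus),ord(fStatusCode))^;
end;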
5.9. Audit Trail for change tracking
Since most CRUD operations are centered within the scope of our mORMot server, we implemented in the ORM an integrated means of tracking changes (aka Audit Trail) of any TSQLRecord.
Keeping track of the history of business objects is a very common need for software modeling, and a must-have for any accurate data modeling, like Domain-Driven Design. By default, as expected by the OOP model, any change to an object will forget any previous state of this object. But thanks to mORMot's exclusive change-tracking feature, you can persist the history of your objects.
5.9.1. Enabling audit-trail
By default, the change-tracking feature is disabled, saving performance and disk use. But you can enable change tracking for any class, by calling the following method, on the server side:
aServer.TrackChanges([TSQLInvoice]);
This single line will let aServer: TSQLRestServer monitor all CRUD operations, and store all changes of the TSQLInvoice table within a TSQLRecordHistory table.
Since all content changes will be stored in this single table by default (note that the TrackChanges() method accepts an array of classes as parameters, and can be called several times), it may be handy to define several tables for history storage. Later on, an external database engine may be defined to store history, e.g. on cheap hardware (and big hard drives), whereas your main database may be powered by high-end hardware (and smaller SSDs) - see below. To do so, you define your custom class for history storage, then supply it as parameter:
type
TSQLRecordSecondaryHistory = class(TSQLRecordHistory);
(...)
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory);
Then, all history will be stored in this TSQLRecordSecondaryHistory class (in its own table named SecondaryHistory), and not the default TSQLRecordHistory class (in its History table).
5.9.2. A true Time Machine for your objects
Once the object changes are tracked, you can later on browse the history of the object, by using the TSQLRecordHistory.CreateHistory(), then HistoryGetLast, HistoryCount, and HistoryGet() methods:
var aHist: TSQLRecordHistory;
aInvoice: TSQLInvoice;
aEvent: TSQLHistoryEvent; // will be either heAdd, heUpdate or heDelete
aTimestamp: TModTime;
(...)
aInvoice := TSQLInvoice.Create;
aHist := TSQLRecordHistory.CreateHistory(aClient,TSQLInvoice,400);
try
writeln('Number of items in the record history: ',aHist.HistoryCount);
for i := 0 to aHist.HistoryCount-1 do begin
aHist.HistoryGet(i,aEvent,aTimestamp,aInvoice);
writeln;
writeln('Event: ',GetEnumName(TypeInfo(TSQLHistoryEvent),ord(aEvent))^);
writeln('Timestamp: ',TTimeLogBits(aTimestamp).ToText);
writeln('Identifier: ',aInvoice.Number);
writeln('Value: ',aInvoice.GetJSONValues(true,true,soSelect));
end;
finally
aHist.Free;
aInvoice.Free;
end;
By design, direct SQL changes are not handled. If you run some SQL statements like DELETE FROM ... or UPDATE ... SET ... within your application or from any external program, then the History table won't be updated. In fact, the ORM does not set any DB trigger to track low-level changes: it will slow down the process, and void the persistence agnosticism paradigm we want to follow, e.g. allowing to use a NoSQL database like MongoDB.
When the history grows, the JSON content may become huge, and fill the disk space with a lot of duplicated content. In order to save disk space, when a record reaches a defined number of JSON data rows, all this JSON content is gathered and compressed into a BLOB, in TSQLRecordHistory.History. You can force this packing process by calling TSQLRestServer.TrackChangesFlush() manually in your code. Calling this method will also have a welcome side effect: it will read the actual content of the record from the database, then add a fake heUpdate event in the history if the field values do not match the ones computed from the tracked changes, to ensure that the audit trail will be correct. As a consequence, the history will always stay synchronized with the actual data persisted in the database, even if some external SQL did by-pass the CRUD methods of the ORM, via unsafe DELETE FROM ... or UPDATE ... SET ... statements.
You can tune how packing is defined for a given TSQLRecord table, by using some optional parameters to the registering method:
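(the call is not reproduced in this extract; a sketch, with positional parameters matching the default values quoted in the next paragraph - check the actual TrackChanges() declaration in mORMot.pas for the exact signature - could be)
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory,
  1000,       // maximum number of JSON rows before TrackChangesFlush() is triggered
  10,         // maximum number of JSON rows kept per record before BLOB compression
  64 shl 10); // maximum uncompressed BLOB size per record (64 KB)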
Take a look at the documentation of this method (or the comments in its declaration code) for further information. Default options will let TSQLRestServer.TrackChangesFlush() be called after 1000 individual TSQLRecordHistory.SentDataJSON rows are stored, then will compress them into a BLOB once 10 JSON rows are available for a given record, ensuring that the uncompressed BLOB size for a single record won't use more than 64 KB of memory (but probably much less in the database, since it is stored with very high compression rate).
5.10. Master/slave replication
As stated during TSQLRecord fields definition, the ORM is able to maintain a revision number for any TSQLRecord table, so that the table may be easily synchronized remotely by another TSQLRestServer instance. If you define a TRecordVersion published property, the ORM core will fill this field just before any write with a monotonically increasing revision number, and will take care of any deletion, so that those modifications may be replayed later on any other database.
This synchronization will work as a strict master/slave replication scheme, as a one-way on demand refresh of a replicated table. Each write operation on the master database on a given table may be easily reflected on one or several slave databases, with almost no speed nor storage size penalty.
In addition to this on demand synchronization, a real-time notification mechanism, using WebSockets communication - see below - may be defined.
5.10.1. Enable synchronization
In order to enable this replication mechanism, you should define a TRecordVersion published property in the TSQLRecord class type definition:
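(a minimal sketch - the parent class and field name are illustrative - could be)
type
  TSQLRecordPeopleVersioned = class(TSQLRecordPeople)
  protected
    fVersion: TRecordVersion;
  published
    // this single field enables the master/slave replication mechanism
    property Version: TRecordVersion read fVersion write fVersion;
  end;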
Only a single TRecordVersion field is allowed per TSQLRecord class - it would not make sense to manage more than one field of this type.
Note that this field will be somewhat "hidden" to most ORM process: a regular TSQLRest.Retrieve won't fill this Version property, since it is an internal implementation detail. If you want to lookup its value, you will have to explicitly state its field name at retrieval. Any TRecordVersion is indeed considered as a "non simple field", just like BLOB fields, so will need explicit retrieval of its value.
In practice, any TSQLRest.Add and TSQLRest.Update on this TSQLRecordPeopleVersioned class will increase this Version revision number field, and a TSQLRest.Delete will populate an external TSQLRecordTableDelete table with the ID of the deleted record, associated with a TRecordVersion revision.
The TSQLRecordTableDelete table should be part of the TSQLModel, in conjunction with TSQLRecordPeopleVersioned;
If the TSQLRecordTableDelete table is not part of the TSQLModel, the TSQLRestServer will add it - but you had better make it appear explicitly in the data model;
A single TSQLRecordTableDelete table will maintain the list of all deleted data rows, of all tables containing a TRecordVersion published field;
The TSQLRecordPeopleVersioned table appearance order in the TSQLModel will matter, since TSQLRecordTableDelete.ID will use this table index order in the database model to identify the table type of the deleted row - in a similar way to TRecordReference and TRecordReferenceToBeDeleted.
All the synchronization preparation will be taken care of by the ORM kernel on its own, during any write operation. There is nothing particular to maintain or set up, in addition to this TRecordVersion field definition, and the global TSQLRecordTableDelete table.
5.10.2. From master to slave
To replicate this TSQLRecordPeopleVersioned table from another TSQLRestServer instance, just call the following method:
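(the call is not reproduced in this extract; assuming the RecordVersionSynchronizeSlave() method of TSQLRestServer - check mORMot.pas for its exact signature - it would read)
aServer.RecordVersionSynchronizeSlave(TSQLRecordPeopleVersioned,Client);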
This single line will request a remote server via a Client: TSQLRestClientURI connection (which may be over HTTP) for any pending modifications since its last call, then will fill the local aServer: TSQLRestServer database so that the local TSQLRecordPeopleVersioned table will contain the very same content as the remote master TSQLRestServer.
Using a TTimer may increase responsiveness of a client application, and allow refresh of displayed data, with limited resources (e.g. with a 500 ms period, on a given screen).
Only the modified data will be transmitted over the wire, as two REST/JSON queries (one for the insertions/updates, another for the deletions), and all the local write process will use optimized BATCH writing - see below. This means that the synchronization process will try to use as minimal bandwidth and resources as possible, on both sides.
In practice, you may define the Master side as such:
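(the definition is not reproduced in this extract; a minimal sketch - the CreateModel function, host name and port are illustrative - could be)
MasterModel := CreateModel; // from a shared unit
MasterServer := TSQLRestServerDB.Create(MasterModel,'master.db3');
MasterServer.CreateMissingTables;
MasterHttpServer := TSQLHttpServer.Create('8888',[MasterServer]);
// and, on the client side (typically another process), using the very same shared unit:
MasterModel := CreateModel;
MasterClient := TSQLHttpClient.Create('masterhost','8888',MasterModel);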
Of course, the model should match for both MasterServer and MasterClient instances. This is why we used the same MasterModel variable name (probably defined in a shared unit).
Assuming that the slave database has been defined as such:
This command will process the replication as such:
ORM Replication Classes via REST
Of course, the slaves should be considered as read-only, otherwise the version numbers may conflict, and the whole synchronization may become a failure.
But you can safely replicate servers in cascade, if needed: the version numbers will be propagated from masters to slaves, and the data will always remain consistent.
ORM Cascaded Replication Classes via REST
This cascading Master/Slave replication design may be used in conjunction with the CQRS pattern (Command Query Responsibility Segregation). In fact, the Slave 2 database may be a local read-only database instance, used only for reporting purposes, e.g. by marketing or management people, whereas Slave 1 may be the active read-only database, on which all local business processes will read their data. As such, the Slave 2 instance may be replicated much less often than the Slave 1 database - which may even be replicated in real time, as we will now see.
5.10.3. Real-time synchronization
Sometimes, the on-demand synchronization is not enough.
For instance, you may need to:
Synchronize a short list of always evolving items which should be reflected as soon as possible;
Involve some kind of ACID-like behavior (e.g. handle money!) in your replicated data;
Replicate not from a GUI application, but from a service, so use of a TTimer is not an option;
Combine REST requests (for ORM or services) and master/slave ORM replication on the same wire, e.g. in a multi-threaded application;
Use an Event Oriented Persistence, and expect to be notified from any change of state - see below.
The first requirement is to allow WebSockets on your Master HTTP server, so initialize the TSQLHttpServer class as a useBidirSocket kind of server - see below:
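(the initialization is not reproduced in this extract; a minimal sketch of both sides - assuming the WebSocketsEnable / WebSocketsUpgrade methods detailed below, with illustrative host and port values - could be)
// Master side: publish the server with bidirectional WebSockets support
MasterHttpServer := TSQLHttpServer.Create('8888',[MasterServer],'+',useBidirSocket);
MasterHttpServer.WebSocketsEnable(MasterServer,'PrivateAESEncryptionKey');
// Client side: connect, then upgrade the connection to WebSockets
MasterClient := TSQLHttpClientWebsockets.Create('masterhost','8888',MasterModel);
MasterClient.WebSocketsUpgrade('PrivateAESEncryptionKey');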
Of course, the model should match for both MasterServer and MasterClient instances, and so should the WebSockets protocol definition - here above, the same 'PrivateAESEncryptionKey' private key on both sides.
Then you enable the real-time replication service on the Master side:
MasterServer.RecordVersionSynchronizeMasterStart;
In practice, it will publish an IServiceRecordVersion interface-based service on the server side - see below.
Assuming that the slave database has been defined as such:
(in this case, the SlaveModel may not be the same as the MasterModel, but TSQLRecordPeopleVersioned should be part of both models) Then you can initiate real-time replication from the slave side with a single line, for a given table:
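(the call is not reproduced in this extract; assuming the RecordVersionSynchronizeSlaveStart() method matching the Stop method quoted just below, it would read)
SlaveServer.RecordVersionSynchronizeSlaveStart(TSQLRecordPeopleVersioned,MasterClient);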
The above command will subscribe to the remote MasterSlave replication service (i.e. IServiceRecordVersion interface), to receive any change concerning the TSQLRecordPeopleVersioned ORM table, using the MasterClient connection via WebSockets, and persist all updates into the local SlaveServer database.
To stop the real-time notification for this ORM table, you could execute:
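(that is, a call such as)
SlaveServer.RecordVersionSynchronizeSlaveStop(TSQLRecordPeopleVersioned);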
Even if you do not call RecordVersionSynchronizeSlaveStop(), the replication will be stopped when the main SlaveServer instance is released, and the MasterServer will unsubscribe this connection from its internal notification list.
This typical replication may be represented as such:
ORM Real-Time Replication Classes
The real-time notification details have been tuned, to consume as little bandwidth and resources as possible. For instance, if several modifications are to be notified on a slave connection in a short amount of time, the master is able to gather those modifications into a single WebSockets frame, which will be applied as a whole to the slave database, in a single BATCH transaction - see below.
5.10.4. Replication use cases
We may consider a very common corporate infrastructure:
Corporate Servers Replication
This kind of installation, with a main central office, and a network of local offices, will benefit from this master/slave replication. Simple redirection may be used - see below - but local offices will expect the work to continue even in case of an Internet network failure; REST redirection would require a 100% reliable connection up-link, which may be critical in some cases.
You could therefore implement replication in several ways:
Either the main office is the master, and any write will be pushed to the Main Server, whereas local offices will have a replicated copy of the information - the drawback is that in case of network failure, the local office will be limited to read-only data access;
Corporate Servers Master/Slave Replication With All Data On Main Server
Or each local office may host its own data in a dedicated table, synchronized as a master database; the main office will replicate (as a slave) the private data of each local server; in addition, all this data gathered by the Main Server may be further replicated to the other local offices, and be still accessible in read mode - in case of network failure, all the data is available on local servers, and the local private table is still writable.
Corporate Servers Master/Slave Replication With Private Local Data
Of course, the second solution seems preferable, even if a bit more difficult to implement. The ability of all local offices to work offline on their own private data, but still having all the other data accessible as read-only, will be a huge ROI.
As a benefit of using replication, the central main server will be less stressed, since most of the process will take place in the local servers, and the main office server will only be used for shared data backup and read-only gathering of the other local databases. Only a small network bandwidth will be necessary (much less than a pure web solution), and CPU/storage resources will be minimal.
If needed, the Real-time synchronization will allow the main office data to be replicated in "near real-time" in the local offices' databases, whereas the write operations will still safely take place on the main office. Another cascading replication may take place within any node, with an on-demand refresh, e.g. a 1 hour period, to implement the CQRS pattern (Command Query Responsibility Segregation).
Corporate Servers Master/Slave Replication With CQRS
Following the CQRS pattern, some demanding Queries may take place in those read-only "Reporting" replicated databases, without impacting the main local databases, in which all actual "Business" will take place.
6. Daily ORM
Adopt a mORMot
When you compare ORM and standard SQL, some aspects must be highlighted.
First, you do not have to worry about field orders and names, and can use field completion in the IDE. It is much more convenient to type Baby. then select the Name property, and access its value.
The ORM code is much more readable than the SQL. You do not have to switch your mind from one syntax to another in your code, because SQL is a true language of its own (see SQL Is A High-Level Scripting Language at http://www.fossil-scm.org/index.html/doc/tip/www/theory1.wiki ). You can even forget about the SQL itself for most projects; only some performance-related or complex queries should be written in SQL, but you will avoid it most of the time. Think object pascal. And happy coding. Your software architecture will thank you for it.
Another good impact is the naming consistency. For example, what if you want to rename your table? Just change the class definition, and your IDE will do all the refactoring for you, without any risk of missing a hidden SQL statement anywhere. Do you want to rename or delete a field? Change the class definition, and the Delphi compiler will let you know all the places where this property was used in your code. Do you want to add a field to an existing database? Just add the property definition, and the framework will create the missing field in the database schema for you.
Another risk-related improvement is about strong type checking, performed by the Delphi language at compile time, whereas SQL is only checked at execution time. You will avoid most runtime exceptions for your database access: your clients will thank you for that. In one word, forget about field typing mismatch or wrong type assignment in your database tables. Strong typing is great in such cases for code SQA, and if you have worked with some scripting languages (like JavaScript, Python or Ruby), you should have wished to have this feature in your project!
It is worth noting that our framework allows writing triggers and stored procedures (or stored-procedure-like code) in Delphi, and can create key indexes and perform foreign key checking from the class definition.
Another interesting feature is the enhanced Grid component supplied with this framework, and the AJAX-ready orientation, by natively using JSON flows for Client-Server data streaming. The REST protocol can be used in most applications, since the framework provides you with an easy to use "Refresh" and caching mechanism. You can even work off-line, with a local database replication of the remote data.
For Client-Server - see below - you do not have to open a connection to the database, just create an instance of a TSQLRestClient object (with the communication layer you want to use: direct access, Windows Messages, named pipe or HTTP), and use it as any normal Delphi object. All the SQL coding or communication and error handling will be done by the framework. The same code can be used in the Client or Server side: the parent TSQLRest object is available on both sides, and its properties and methods are strong enough to access the data.
6.1. ORM is not Database
It is worth emphasizing that you should not think about the ORM as a mapping of an existing DB schema. This is a common mistake in ORM design.
The database is just one way of persisting your objects:
Don't think about tables with simple types (text/number...), but objects with high level types;
Don't think about Master/Detail, but logical units;
Don't think "SQL", think about classes;
Don't wonder "How will I store it?", but "Which data do I need?".
For instance, don't be tempted to always create a pivot table (via a TSQLRecordMany property), but consider using a dynamic array, TPersistent, TStrings or TCollection published properties instead.
Or consider that you can use a TRecordReference property pointing to any registered class of the TSQLModel, instead of creating one TSQLRecord property per potential table.
The mORMot framework is even able to persist the object without any SQL database, e.g. via TSQLRestStorageInMemory. In fact, its ORM core is optimized but not tied to SQL.
6.1.1. Objects, not tables
With an ORM, you should usually define fewer tables than in a "regular" relational database, because you can use the high-level type of the TSQLRecord properties to handle some per-row data.
The first point, which may be shocking for a database architect, is that you should better not create Master/Detail tables, but just one "master" object with the details stored within, as JSON, via dynamic array, TPersistent, TStrings or TCollection properties.
Another point is that a table is not to be created for every aspect of your software configuration. Let's confess that some DB architects design one configuration table per module or per data table. In an ORM, you could design a configuration class, then use the unique corresponding table to store all configuration encoded as some JSON data, or some DFM-like data. And do not hesitate to separate the configuration from the data, for all non-data-related configuration - see e.g. how the mORMotOptions unit works. With our framework, you can serialize directly any TSQLRecord or TPersistent instance into JSON, without the need of adding this TSQLRecord to the TSQLModel list. Since revision 1.13 of the framework, you can even define TPersistent published properties in your TSQLRecord class, and they will be automatically serialized as TEXT in the database.
6.1.2. Methods, not SQL
At first, you should be tempted to write code as such (this code sample was posted on our forum, and is not bad code, just not using the ORM orientation of the framework):
DrivesModel := CreateDrivesModel();
GlobalClient := TSQLRestClientDB.Create(DrivesModel, CreateDrivesModel(), 'drives.sqlite', TSQLRestServerDB);
TSQLRestClientDB(GlobalClient).Server.DB.Execute(
'CREATE TABLE IF NOT EXISTS drives ' +
'(id INTEGER PRIMARY KEY, drive TEXT NOT NULL UNIQUE COLLATE NOCASE);');
for X := 'A' to 'Z' do begin
  TSQLRestClientDB(GlobalClient).Server.DB.Execute(
'INSERT OR IGNORE INTO drives (drive) VALUES ("' + StringToUTF8(X) + ':")');
end;
Please, don't do that!
The correct ORM-oriented implementation should be the following:
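(the lines are not reproduced in this extract; a minimal sketch matching the three points listed just below - the local Drive variable is illustrative - could be)
GlobalClient := TSQLRestClientDB.Create(DrivesModel, CreateDrivesModel(), 'drives.sqlite', TSQLRestServerDB);
TSQLRestClientDB(GlobalClient).Server.CreateMissingTables; // no CREATE TABLE statement
if GlobalClient.TableRowCount(TSQLDrives)=0 then begin
  Drive := TSQLDrives.Create;
  try
    for X := 'A' to 'Z' do begin
      Drive.Drive := StringToUTF8(X+':');
      GlobalClient.Add(Drive,true); // no INSERT statement either
    end;
  finally
    Drive.Free;
  end;
end;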
In the above lines, no SQL was written. It is up to the ORM to:
Create all missing tables, via the CreateMissingTables method - and not compute by hand a "CREATE TABLE IF NOT EXISTS..." SQL statement;
Check if there are already some rows of data, via the TableRowCount method - instead of a "SELECT COUNT(*) FROM DRIVES";
Append some data using a high-level TSQLDrives Delphi instance and the Add method - and not any "INSERT OR IGNORE INTO DRIVES...".
Then, in order to retrieve some data, you'll be tempted to code something like that (extracted from the same forum article):
procedure TMyClient.FillDrives(aList: TStrings);
var
table: TSQLTableJSON;
X, FieldIndex: Integer;
begin
table := TSQLRestClientDB(GlobalClient).ExecuteList([TSQLDrives], 'SELECT * FROM drives');
if (table <> nil) then
try
FieldIndex := table.FieldIndex('drive');
if (FieldIndex >= 0) then
  for X := 1 to table.RowCount do
aList.Add(UTF8ToString(table.GetU(X, FieldIndex)));
finally
table.Free;
end;
end;
Thanks to the TSQLTableJSON class, the code is somewhat easy to follow. Using a temporary FieldIndex variable also makes it fast inside the loop execution.
But it could also be coded as such, using the CreateAndFillPrepare then FillOne method in a loop:
procedure TMyClient.FillDrives(aList: TStrings);
begin
aList.BeginUpdate;
try
aList.Clear;
with TSQLDrives.CreateAndFillPrepare(GlobalClient,'') do
try
  while FillOne do
    aList.Add(UTF8ToString(Drive));
finally
Free;
end;
finally
aList.EndUpdate;
end;
end;
We even added the BeginUpdate / EndUpdate VCL methods, to have even cleaner and faster code (if you work with a TListBox e.g.).
Note that in the above code, a hidden TSQLTableJSON class is created in order to retrieve the data from the server. The abstraction introduced by the ORM methods does not make the code any slower, but it is less error-prone (e.g. Drive is now a RawUTF8 property), and easier to understand.
But ORM is not perfect in all cases.
For instance, if the Drive field is the only column content to retrieve, it could make sense to ask only for this very column. One drawback of the CreateAndFillPrepare method is that, by default, it retrieves all columns content from the server, even if you need only one. This is a common potential issue of an ORM: since the library doesn't know which data is needed, it will retrieve all object data, which in some cases is not worth it.
You can specify the optional aCustomFieldsCSV parameter as such, in order to retrieve only the Drive property content, and potentially save some bandwidth:
with TSQLDrives.CreateAndFillPrepare(GlobalClient,'','Drive') do
Note that for this particular case, you have an even more high-level method, handling directly a TStrings property as the recipient:
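(assuming the OneFieldValues() overload which takes a TStrings recipient as its last parameter, as used with a WHERE clause in the next sample, the whole method body could become)
procedure TMyClient.FillDrives(aList: TStrings);
begin
  GlobalClient.OneFieldValues(TSQLDrives,'drive','',aList);
end;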
The whole query is made in one line, with no SELECT statement to write.
For a particular ID range, you may have written, with a specific WHERE clause using a prepared statement:
GlobalClient.OneFieldValues(TSQLDrives,'drive','ID>=? AND ID<=?',[],[aFirstID,aLastID],aList);
It is certainly worth reading all the (verbose) interface part of the mORMot.pas unit, e.g. the TSQLRest class, to get your own idea of all the high-level methods available. In the following pages, you'll find all the needed documentation about this particular unit. Since our framework is used in real applications, most useful methods should already be available. If you need additional high-level features, feel free to ask for them, if possible with a source code sample, in our forum, freely available at https://synopse.info
6.1.3. Think multi-tier
And do not forget the framework is able to have several levels of objects, thanks to our Client-Server architecture - see below. Such usage is not only possible, but strongly encouraged.
You should have business-logic level objects at the Client side. Then both business-logic and DB objects at the Server side.
If you have a very specific database schema, business-logic objects can be of very high level, encapsulating some SQL views for reading, and accessed via some RESTful service commands for writing - see below.
Another possibility to access your high-level types is to use either custom SQLite3 SQL functions or stored procedures - see below - both coded in Delphi.
6.2. One ORM to rule them all
Just before entering deeper into the mORMot material in the following pages (Database layer, Client-Server, Services), you may find out that this implementation may sound restrictive.
Some common (and founded) criticisms are the following (quoting from our forum):
"One of the things I don't like so much about your approach to the ORM is the mis-use of existing Delphi constructs like "index n" attribute for the maximum length of a string-property. Other ORMs solve this i.e. with official Class-attributes";
"You have to inherit from TSQLRecord, and can't persist any plain class";
"There is no way to easily map an existing complex database".
Those concerns are pretty understandable. Our mORMot framework is not meant to fit every purpose, but it is worth understanding why it has been implemented as such, and why it may be quite unique within the family of ORMs - almost all of which follow the Hibernate way of doing things.
6.2.1. Rude class definition
Attributes did appear in Delphi 2010, and it is worth saying that FPC has an alternative syntax. Older versions of Delphi (still widely deployed) do not have attributes available in the language, so relying on them would have made it impossible to stay compatible from Delphi 6 up to the latest versions (as we wished for our units).
It is perfectly right to speak about 'mis-use of index' - but this was the easiest and only way we found out to have such information, just using RTTI. Since this parameter was ignored and not used for most classes, it was re-used (also for dynamic array properties, to have faster lookup). There is another "mis-use" for the "stored AS_UNIQUE" property, which is used to identify unique mandatory columns.
Using attributes is one of the most common ways of describing tables in most ORMs. On the other hand, some coders have a concern about such class definitions. They are mixing DB and logic: you are somewhat polluting the business-level class definition with DB-related stuff.
That is why other kinds of ORMs provide a way of mapping classes to tables using external files (some ORMs provide both ways of definition). And why, these days, even code gurus identify the overuse of attributes as a potential weakness of code maintainability. Attributes do have a huge downside, when you are dealing with a Client-Server ORM, like ours: on the Client side, those attributes are pointless (the client does not need to know anything about the database), and you would need to link all the DB plumbing code into your application. For mORMot, this was some kind of strong argument.
For the very same reasons, the column definitions (uniqueness, indexes, required) are managed in mORMot at two levels:
At ORM level for DB related stuff (like indexes, which is a DB feature, not a business feature);
At Model level for Business related stuff (like uniqueness, validators and filters).
When you take a look at the supplied validators and filters - see Filtering and Validating - you'll find out that this is much more powerful than the attributes available in "classic" ORMs: how could you validate an entry to be an email, or to match a pattern, or ensure that it will be stored in uppercase within the DB?
Another question worth asking is about security. If you access the data remotely, a global access to the DB is certainly not enough. Our framework handles per-table CRUD level access for its ORM, above the DB layer (and also has complete security attributes for services) - see below. It works whatever the underlying DB grants are (even a DB with no user rights - like in-memory or SQLite3 storage - is able to do it).
The mORMot point of view (which is not the only one) is to let the DB persist the data, as safely and efficiently as possible, but rely on higher-level layers to implement the business logic. The framework favors convention over configuration, which is known to save a lot of time (if you use WCF on a daily basis, as I do, you and your support team know about the .config syndrome). It makes the framework pretty database-agnostic (you may even not use a SQL database at all), and keeps its code easier to debug and maintain, since we do not have to deal with all the DB engine particularities. In short, this is the REST point of view, and the main cause of its success: CRUD is enough in any KISS-friendly design.
6.2.2. Persist TSQLRecord, not any class
About the fact that you need to inherit from TSQLRecord, and can't persist any PODO (Plain Old Delphi Object), our purpose was in fact very similar to the "Layer Supertype" pattern of Domain-Driven-Design, as explained by Martin Fowler: It is not uncommon for all the objects in a layer to have methods you don't want to have duplicated throughout the system. You can move all of this behavior into a common Layer Supertype.
In fact, for TSQLRecord / TSQLRest / ORM remote access, you already have all Client-Server CRUD operations available. Those classes are abstract common Supertypes, ready to be used in your projects. They have been optimized a lot (e.g. with a cache and other nice features), so I do not think reinventing a CRUD / database service is worth the price. You have secure access to the ORM classes, with user/group attributes. Almost everything is created by code, just from the TSQLRecord class definition, via RTTI. So it may be faster (and safer) to rely on it, than defining all your class hierarchy by hand.
To be fair, most DDD frameworks for Java or C# expect e.g. Entity classes to inherit from a given Entity base class, or add class attributes to the POJO/POCO to define the persistence details. So we are not the only ones in this case!
But the concern of not being able to persist any class (it needs to inherit from TSQLRecord) does perfectly make sense, especially in the context of DDD modeling, where the domain objects benefit from being uncoupled from the framework, which may otherwise pollute the domain logic. Such expectations tend to break the Persistence Ignorance principle, as requested by DDD patterns.
This is why we added to the framework the ability to persist any plain class, using Repository services, but still using the ORM under the hood, for the actual persistence on any SQL or NoSQL database engine. The TSQLRecord can be generated from any Delphi persistent class, then an automated mapping is maintained by mORMot between both class instances. Data access is then defined as clean CQRS Repository Services. For instance, a TUser class may be persisted via such a service:
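A rough sketch of what such service contracts may look like follows - method names, signatures and GUIDs are illustrative only, assuming the ICQRSService / TCQRSResult types of mORMotDDD.pas; the actual sample code may differ:
type
  // plain Delphi class, uncoupled from TSQLRecord
  TUser = class(TSynPersistent)
  private
    fLogonName: RawUTF8;
    fName: RawUTF8;
  published
    property LogonName: RawUTF8 read fLogonName write fLogonName;
    property Name: RawUTF8 read fName write fName;
  end;

  // read-only operations
  IDomUserQuery = interface(ICQRSService)
    ['{198C01D6-5189-4B74-AAF4-C322237D7D53}']
    function SelectByLogonName(const aLogonName: RawUTF8): TCQRSResult;
    function Get(out aAggregate: TUser): TCQRSResult;
  end;

  // write operations, inheriting all the read methods
  IDomUserCommand = interface(IDomUserQuery)
    ['{D345854F-7337-4006-B324-5D635FBED312}']
    function Add(const aAggregate: TUser): TCQRSResult;
    function Update(const aUpdatedAggregate: TUser): TCQRSResult;
    function Delete: TCQRSResult;
    function Commit: TCQRSResult;
  end;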
Here, the write operations are defined in an IDomUserCommand service, which is separated from (but inherits from) IDomUserQuery, used for read operations. See below for more details about this feature.
6.2.3. Several ORMs at once
To be clear, mORMot offers several kinds of table definitions:
Via TSQLRecord / TSQLRecordVirtual "native ORM" classes: data storage is using either fast in-memory lists via TSQLRestStorageInMemory, or SQLite3 tables (in memory, on file, or virtual). In this case, we do not use index for strings (column length is not used by any of those engines).
Via TSQLRecord "external ORM-managed" classes: after registration via a call to the VirtualTableExternalRegister() / VirtualTableExternalMap() functions, external DB tables are created and managed by the ORM, via SQL - see below. These classes will allow creation of tables in any supported database engine - currently SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix and NexusDB - via whatever OleDB, ODBC / ZDBC provider, or any DB.pas unit). For the "external ORM-managed" TSQLRecord type definitions, the ORM expects to find an index attribute for any text column length (i.e. RawUTF8 or string published properties). This is the only needed parameter to be defined for such a basic implementation, in regard to TSQLRecord kind of classes. Then can specify addition field/column mapping, if needed.
Via TSQLRecordMappedAutoID / TSQLRecordMappedForcedID "external mapped" classes: DB tables are not created by the ORM, but already exist in the DB, sometimes with a very complex layout. This feature is not implemented yet, but is on the road-map. For this kind of classes we will probably not use attributes, nor external files, but rely on definition from code, either with a fluent definition, or with dedicated classes (or interfaces).
Via any kind of Delphi class, mapped to their internal TSQLRecord class, using CQRS Repository Services as presented below.
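As announced in the "external ORM-managed" item above, here is a sketch of such a registration - TSQLInvoice, the connection settings and the column names are illustrative only:
// to be called once the TSQLModel is instantiated, before TSQLRestServer.Create
aProps := TSQLDBOracleConnectionProperties.Create('TnsName','','usr','pwd');
VirtualTableExternalRegister(aModel,TSQLInvoice,aProps,'INVOICE');
// optional field/column mapping, using the fluent interface
aModel.Props[TSQLInvoice].ExternalDB.
  MapField('ID','INVOICE_ID').
  MapField('Total','TOTAL_AMOUNT');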
Why have several database back-ends at the same time?
Most existing software architectures rely on one dedicated database per domain, since it is more convenient to administrate a single server. But there are some cases when it does make sense to have several databases at once.
In practice, when your data starts to grow, you may need to archive older data in a dedicated remote database, e.g. using cheap storage (bunch of Hard Drives in RAID). Since this data will be seldom retrieved, it is not an issue to have slower access time. And you will be able to keep your most recent data accessible in a local high-speed engine (running on SSD).
Another pattern is to use dedicated consolidation DBs for analysis. In fact, SQL normalization is good for most common relational work, but sometimes denormalization is necessary, e.g. for statistics or business analysis purposes. In this case, dedicated consolidation databases contain the data already prepared and indexed in a ready-to-use denormalized layout.
Last but not least, some Event Sourcing architectures even expect several DB back-end at once:
It will store the status on one database (e.g. high-performance in-memory) for most common requests to be immediate;
And store the modification events in another ACID database (e.g. SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix or NexusDB), or even a high-speed NoSQL engine like MongoDB.
It is possible to directly access ORM objects remotely (e.g. the consolidation DB), mostly in a read-only way, for dedicated reporting, e.g. from consolidated data - this is one potential CQRS implementation pattern with mORMot. Thanks to the framework security, remote access will be safe: your clients won't be able to change the consolidation DB content!
As can be easily guessed, such design models are far away from a basic ORM built only for class persistence. And mORMot's ORM/ODM offers you all those possibilities.
6.2.4. The best ORM is the one you need
Therefore, we may sum up some potential uses of ORM, depending on your intent:
If you want to persist some data objects (not tied to complex business logic), the framework's ORM will be a light and fast candidate, targeting SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix, NexusDB databases, or even no SQL engine at all, using the TSQLRestStorageInMemory class which is able to persist its content in small files - see below;
If your understanding of ORM is just to persist some existing objects with associated business code, mORMot could help you, thanks to its Repository Services automatically generated over TSQLRecord, as presented below;
If you want a very fast low-level Client-Server layer, mORMot is a first class candidate: we identified that some users are using the built-in JSON serialization and HTTP server features to create their application, using a RESTful/SOA architecture - see below and below;
If your expectation is to map an existing complex RDBMS, mORMot will allow you to publish existing SQL statements as services, using e.g. interface-based services - see below - over optimized SynDB.pas data access - see below - as explained in Legacy code and existing projects;
If you need (perhaps not now, but probably in the future) to create some kind of scalable Domain-Driven design architecture, you'll have all needed features at hand with mORMot;
Therefore, mORMot is not just an ORM, nor just a "classic" ORM.
7. Database layer
Adopt a mORMot
7.1. SQLite3-powered, not SQLite3-limited
The core database of this framework uses the SQLite3 library, which is a free, secure, zero-configuration, server-less, stable, cross-platform, single-file database engine.
As stated below, you can use any other database access layer, if you wish:
A fast in-memory engine is included, which outperforms any SQL-based solution in terms of speed - but at the price of a non-ACID behavior on disk (it remains ACID in RAM);
An integrated SQLite3 engine, which is the best candidate for an embedded solution, even on server side;
Any remote RDBMS database, via one or more OleDB, ODBC, Zeos or Oracle connections to store your precious ORM objects. Or you can use any DB.pas unit, e.g. to access NexusDB or any database engine supported by DBExpress, FireDAC, AnyDAC, UniDAC (or the deprecated BDE). In all cases, the ORM currently supports the SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix and NexusDB SQL dialects;
SQlite3 will be used as the main SQL engine, able to JOIN all those tables, thanks to its Virtual Table unique feature. You can in fact mix internal and external engines, in the same database model, and access all data in one unique SQL statement.
7.1.1. SQLite3 as core
This framework uses a compiled version of the official SQLite3 library source code, and includes it natively into Delphi code. The framework therefore adds some very useful capabilities to the standard SQLite3 database engine, while keeping all its advantages, as listed in the previous paragraph of this document:
Can be either statically linked to the executable, or load external sqlite3.dll;
Faster database access, through unified memory model, and usage of the FastMM4 memory manager (which is almost 10 times faster than the default Windows memory manager for memory allocation);
Optional direct encryption of the data on the disk (up to AES256 level, that is Top-Secret security);
Use via mORMot's ORM lets the database layout be declared once in the Delphi source code (as published properties of classes), avoiding most SQL writing, hence common field or table name mismatches;
Locking of the database at the record level (SQLite3 only handles file-level locking);
Of course, the main enhancement added to the SQLite3 engine is that it can be deployed in a stand-alone or Client-Server architecture, whereas the default SQLite3 library works only in stand-alone mode.
From the technical point of view, here are the current compilation options used for building the SQLite3 engine:
SQLite3 library unit was compiled including RTREE extension for doing very fast range queries;
It can include FTS3/FTS4 full text search engine (MATCH operator), with integrated SQL optimized ranking function;
The framework makes use only of newest API (sqlite3_prepare_v2) and follows latest SQLite3 official documentation;
Additional collations (i.e. sorting functions) were added to handle efficiently not only UTF-8 text, but also e.g. ISO 8601 time encoding, fast Win1252 diacritic-agnostic comparison and native slower but accurate Windows UTF-16 functions;
Additional SQL functions like Soundex for English/French/Spanish phonemes, MOD or CONCAT, and some dedicated functions able to directly search for data within BLOB fields containing a Delphi high-level type (like a serialized dynamic array);
Additional REGEXP operator/function using the Open Source PCRE library to perform regular expression queries in SQL statements;
Custom SQL functions can be defined in Delphi code;
Automatic SQL statement parameter preparation, for execution speed up;
TSQLDatabase can cache the last results for SELECT statements, or use a tuned client-side or server-side per-record caching, in order to speed up most read queries, e.g. for a lighter web server or client User Interface;
User authentication handling (SQLite3 is user-free designed);
SQLite3 source code was compiled without the thread mutex, so the caller has to be thread-safety aware; this is faster on most configurations, since no mutex has to be acquired. The low-level sqlite3_*() functions are not thread-safe, nor are TSQLRequest and TSQLBlobStream which just wrap them; but TSQLDataBase is thread-safe, as are TSQLTableDB / TSQLRestServerDB / TSQLRestClientDB which call TSQLDataBase;
Compiled with the SQLITE_OMIT_SHARED_CACHE define, since with the new Client-Server approach of this framework, no concurrent access can happen, and an efficient internal caching algorithm is added, avoiding most calls to the SQLite3 engine in multi-user environments (any AJAX usage should benefit from it);
The embedded SQLite3 database engine can be easily updated from the official SQLite3 source code available at https://sqlite.org - use the amalgamation C file with a few minor changes (documented in the SynSQLite3Static.pas unit) - the resulting C source code delivered as .obj/.o is also available in the official Synopse source code repository.
The overhead of including SQLite3 in your server application will be worth it: just about 1 MB added to the executable, but with so many nice features, even if only external databases are used.
7.1.2. Extended by SQLite3 virtual tables
Since the framework is truly object oriented, another database engine could be used instead of the SQLite3 engine. You could easily write your own TSQLRestServer descendant (as an example, we included a fast in-memory database engine as TSQLRestServerFullMemory) and link to another engine (like Firebird, or a private one). You can even use our framework without any link to the SQLite3 engine itself, via our provided very fast in-memory dataset (which can be made persistent by writing and reading JSON files on disk). The SQLite3 engine is implemented in a separate unit, named SynSQLite3.pas, and the main unit of the framework is mORMot.pas. A bridge between the two units is made with mORMotSQLite3.pas, which grounds our ORM framework on SQLite3 as its core.
The framework ORM is able to access any database class (internal or external), via the powerful SQLite3 Virtual Table mechanisms - see below. For instance, any external database (via OleDB / ODBC / ZDBC providers or direct Oracle connection) can be accessed via our SynDB.pas-based dedicated units, as stated below.
As a result, the framework has several potential database back-ends, in addition to the default SQLite3 file-based engine. Each engine may have its own purpose, according to the application expectations. Currently SQLite3, Oracle, Jet/MSAccess, MS SQL, Firebird, DB2, PostgreSQL, MySQL, Informix and NexusDB SQL dialects are handled by our ORM.
7.1.3. Data access benchmark
The purpose here is not to say that one library or database is better or faster than another, but to publish a snapshot of the mORMot persistence layer abilities, depending on each access library.
In this timing, we do not benchmark only the "pure" SQL/DB layer access (SynDB.pas units), but the whole Client-Server ORM of our framework.
Process below includes all aspects of our ORM:
Access via high level CRUD methods (Add/Update/Delete/Retrieve, either per-object or in BATCH mode);
Read and write access of TSQLRecord instances, via optimized RTTI;
JSON marshalling of all values (ready to be transmitted over a network);
REST routing, with security, logging and statistics;
Virtual cross-database layer using its SQLite3 kernel;
SQL on-the-fly generation and translation (in virtual mode);
Access to the database engines via several libraries or providers.
In those tests, we just bypassed the communication layer, since TSQLRestClient and TSQLRestServer are run in-process, in the same thread - as a TSQLRestServerDB instance. So you have here some raw performance testimony of our framework's ORM and RESTful core, and may expect good scaling abilities when running on high-end hardware, over a network.
On a recent notebook computer (Core i7 and SSD drive), depending on the back-end database interfaced, mORMot excels in speed, as the following benchmark will show:
You can persist up to 570,000 objects per second, or retrieve 870,000 objects per second (for our pure Delphi in-memory engine);
When data is retrieved from server or client ORM Cache, you can read more than 900,000 objects per second, whatever the database back-end is;
With a high-performance database like Oracle, and our direct access classes, you can write 70,000 (via array binding) and read 160,000 objects per second, over a 100 Mb network;
When using alternate database access libraries (e.g. Zeos, or DB.pas based classes), speed is lower (even if comparable for DB2, MS SQL, PostgreSQL, MySQL) but still enough for most work, due to some optimizations in the mORMot code (e.g. caching of prepared statements, SQL multi-values insertion, direct export to/from JSON, SQlite3 virtual mode design, avoid most temporary memory allocation...).
Difficult to find a faster ORM, I suspect.
7.1.3.1. Software and hardware configuration
The following tables try to sum up all available possibilities, and give some benchmark (average objects/second for writing or reading).
In these tables:
'SQLite3 (file full/off/exc)' indicates use of the internal SQLite3 engine, with or without Synchronous := smOff and/or DB.LockingMode := lmExclusive - see below;
'SQLite3 (mem)' stands for the internal SQLite3 engine running in memory;
'SQLite3 (ext ...)' is about access to a SQLite3 engine as external database - see below, either as file or memory;
'TObjectList' indicates a TSQLRestStorageInMemory instance - see below - either static (with no SQL support) or virtual (i.e. SQL featured via SQLite3 virtual table mechanism) which may persist the data on disk as JSON or compressed binary;
'WinHTTP SQLite3' and 'Sockets SQLite3' stands for a SQLite3 engine published over HTTP using our SynDBRemote.pas unit using the WinHTTP API or plain sockets on the client side - see below, then accessed as an external database by our ORM;
'NexusDB' is the free embedded edition, available from official site;
'Jet' stands for a Jet/MSAccess database engine, accessed via OleDB.
'Oracle' shows the results of our direct OCI access layer (SynDBOracle.pas);
'Zeos *' indicates that the database was accessed directly via the ZDBC layer;
'FireDAC *' stands for FireDAC library;
'UniDAC *' stands for UniDAC library;
'BDE *' when using a BDE connection;
'ODBC *' for a direct access to ODBC;
'MongoDB ack/no ack' for direct MongoDB access (SynMongoDB.pas) with or without write acknowledge.
This list of database providers is to be extended in the future. Any feedback is welcome!
Numbers are expressed in rows/second (or objects/second). This benchmark was compiled with Delphi XE4, since newer compilers tend to give better results, mainly thanks to function in-lining (which did not exist e.g. in Delphi 6-7).
Note that these tests are not about the relative speed of each database engine, but reflect the current status of the integration of several DB libraries within the mORMot database access.
Benchmark was run on a Core i7 notebook, running Windows 7, with a standard SSD, including anti-virus and background applications:
Linked to a shared Oracle 11.2.0.1 database over 100 Mb Ethernet;
MS SQL Express 2008 R2 running locally in 64-bit mode;
IBM DB2 Express-C edition 10.5 running locally in 64-bit mode;
PostgreSQL 9.2.7 running locally in 64-bit mode;
MySQL 5.6.16 running locally in 64-bit mode;
Firebird embedded in revision 2.5.2;
NexusDB 3.11 in Free Embedded Version;
MongoDB 2.6 in 64-bit mode.
So it was a development environment, very similar to a low-cost production site, not dedicated to give the best performance. During the process, the CPU was noticeably used only for SQLite3 in-memory and TObjectList - most of the time, the bottleneck is not the CPU, but the storage or the network. As a result, rates and timing may vary depending on network and server load, but you get results similar to what could be expected on customer side, with an average hardware configuration. When using high-end servers and storage, running on a tuned Linux configuration, you can expect even better numbers.
Tests were compiled with the Delphi XE4 32-bit mode target platform. Most of the tests do pass when compiled as a 64-bit executable, with the exception of some providers (like Jet), not available on this platform. Speed results are almost the same, only slightly slower; so we won't show them here.
You can compile the "15 - External DB performance" supplied sample code, and run the very same benchmark on your own configuration. Feedback is welcome!
From our tests, the UniDAC version we were using had huge stability issues when used with DB2: the tests did not pass, and the DB2 server just hung processing the queries, whereas there was no problem with other libraries. It may have been fixed since, but in the meantime you won't find any "UniDAC DB2" results in the benchmark below.
7.1.3.2. Insertion speed
Here we insert 5,000 rows of data, with diverse scenarios:
'Direct' stands for an individual Client.Add() insertion;
'Trans' indicates that all insertion is nested within a transaction - which makes a great difference, e.g. with a SQlite3 database.
Here are some insertion speed values, in objects/second:
Engine | Direct | Batch | Trans | Batch Trans
SQLite3 (file full) | 462 | 28123 | 84823 | 181455
SQLite3 (file off) | 2102 | 83093 | 88006 | 202667
SQLite3 (file off exc) | 28847 | 193453 | 89451 | 207615
SQLite3 (mem) | 89456 | 236540 | 104249 | 239165
TObjectList (static) | 314465 | 543892 | 326370 | 542652
TObjectList (virtual) | 325393 | 545672 | 298846 | 545018
SQLite3 (ext full) | 424 | 14523 | 102049 | 164636
SQLite3 (ext off) | 2245 | 47961 | 109706 | 189250
SQLite3 (ext off exc) | 41589 | 180759 | 108481 | 192071
SQLite3 (ext mem) | 101440 | 211389 | 113530 | 209713
WinHTTP SQLite3 | 2165 | 36464 | 2079 | 38478
Sockets SQLite3 | 8118 | 75251 | 8553 | 80550
MongoDB (ack) | 10081 | 84585 | 9800 | 85232
MongoDB (no ack) | 33223 | 273672 | 34665 | 274393
ODBC SQLite3 | 492 | 11746 | 35367 | 82425
ZEOS SQlite3 | 494 | 11851 | 56206 | 85705
FireDAC SQlite3 | 20605 | 38853 | 40042 | 113752
UniDAC SQlite3 | 477 | 8725 | 26552 | 38756
ODBC Firebird | 1495 | 18056 | 13485 | 17731
ZEOS Firebird | 10452 | 62851 | 22003 | 63708
FireDAC Firebird | 18147 | 46877 | 18922 | 46353
UniDAC Firebird | 5986 | 14809 | 6522 | 14948
Jet | 4235 | 4424 | 4954 | 5094
NexusDB | 5998 | 15494 | 7687 | 18619
Oracle | 226 | 56112 | 1133 | 52367
ZEOS Oracle | 210 | 32725 | 1027 | 31982
ODBC Oracle | 236 | 1664 | 1515 | 7709
FireDAC Oracle | 118 | 48575 | 1519 | 12566
UniDAC Oracle | 164 | 5701 | 1215 | 2884
BDE Oracle | 489 | 927 | 839 | 1022
MSSQL local | 5246 | 54360 | 12988 | 62453
ODBC MSSQL | 4911 | 18652 | 11541 | 20976
FireDAC MSSQL | 5016 | 7341 | 11686 | 51242
UniDAC MSSQL | 4392 | 29768 | 8649 | 33464
ODBC DB2 | 4792 | 48387 | 14085 | 70104
FireDAC DB2 | 4452 | 48635 | 11014 | 52781
ZEOS PostgreSQL | 4196 | 31409 | 9689 | 41225
ODBC PostgreSQL | 4068 | 26262 | 5130 | 30435
FireDAC PostgreSQL | 4181 | 26635 | 10111 | 36483
UniDAC PostgreSQL | 2705 | 18563 | 4442 | 28337
ODBC MySQL | 3160 | 38309 | 10856 | 47630
ZEOS MySQL | 3426 | 34037 | 12217 | 40186
FireDAC MySQL | 3078 | 43053 | 10955 | 45781
UniDAC MySQL | 3119 | 27772 | 11246 | 33288
Due to its ACID implementation, SQLite3 process on file waits for the hard disk to have finished flushing its data; this is why it is slower than other engines at individual row insertion (less than 10 objects per second with a mechanical hard drive instead of an SSD) outside the scope of a transaction.
So if you want to reach the best writing performance in your application with the default engine, you should use transactions and regroup all writing into services or a BATCH process. Another possibility is to execute DB.Synchronous := smOff and/or DB.LockingMode := lmExclusive at SQLite3 engine level before the process: in case of power loss at the wrong time it may corrupt the database file, but it will increase the rate by a factor of 50 (with a hard drive), as stated by the "off" and "off exc" rows of the table - see below. Note that by default, the FireDAC library sets both options, so its results above are to be compared with the "SQLite3 off exc" rows. In SQLite3 direct mode, the BATCH process benefits from multi-INSERT statements (just like external databases): this explains why BatchAdd() is faster than plain Add(), even in the slowest and safest "file full" mode.
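A minimal sketch of this tuning, assuming aRestServer is a TSQLRestServerDB instance:
// switch the embedded SQLite3 engine to "off exc" mode before heavy writes:
// much faster, but less crash-proof in case of power loss
aRestServer.DB.Synchronous := smOff;        // do not wait for each disk flush
aRestServer.DB.LockingMode := lmExclusive;  // keep the file lock between calls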
For our direct Oracle access SynDBOracle.pas unit, and for the SynDBZeos.pas or SynDBFireDAC.pas (known as Array DML in FireDAC/AnyDAC) libraries, the BATCH process benefits a lot from the array binding feature.
For most engines, our ORM kernel is able to generate the appropriate SQL statement for speeding up bulk insertion. For instance:
SQLite3, MySQL, PostgreSQL, MSSQL 2008, DB2 or NexusDB handle INSERT statements with multiple rows, as INSERT INTO .. VALUES (..),(..),(..)..;
Oracle handles INSERT INTO .. INTO .. SELECT 1 FROM DUAL (weird syntax, isn't it?);
Firebird implements EXECUTE BLOCK.
As a result, some engines show a nice speed boost when BatchAdd() is used. Even SQLite3 is faster when used as an external engine, compared to direct execution! This feature is at the ORM/SQL level, so it benefits any external database library. Of course, if a given library has a better implementation pattern (e.g. our direct Oracle, Zeos or FireDAC with native array binding), it is used instead.
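For reference, a hedged sketch of such a BATCH sequence, assuming Client is e.g. a TSQLRestClientDB or TSQLHttpClient instance, User a TSQLUserAccount instance, and IDs a local TIDDynArray variable (names are illustrative):
// queue 5,000 insertions, sent to the server in a single round-trip
Client.BatchStart(TSQLUserAccount,1000); // auto-transaction every 1000 rows
for i := 1 to 5000 do
begin
  User.LogonName := FormatUTF8('user%',[i]);
  Client.BatchAdd(User,true); // queued in memory, not sent yet
end;
Client.BatchSend(IDs); // send the whole queue, and retrieve the new IDs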
MongoDB bulk insertion has been implemented, which shows an amazing speed increase in BATCH mode. Depending on the MongoDB write concern mode, insertion speed can be very high: by default, every write process will be acknowledged by the server, but you can by-pass this request if you set the wcUnacknowledged mode - note that in this case, any error (e.g. a duplicated value on a unique field) will never be notified, so it should not be used in production, unless you need this feature to quickly populate a database, or consolidate some data as fast as possible.
7.1.3.3. Reading speed
Now the same data is retrieved via the ORM layer:
'By one' states that one object is read per call (the ORM generates a SELECT * FROM table WHERE ID=? for the Client.Retrieve() method);
'All *' is when all 5000 objects are read in a single call (i.e. running SELECT * FROM table from a FillPrepare() method call), either forced to use the virtual table layer, or with direct static call.
Here are some reading speed values, in objects/second:
Engine | By one | All Virtual | All Direct
SQLite3 (file full) | 127284 | 558721 | 550842
SQLite3 (file off) | 126896 | 549450 | 526149
SQLite3 (file off exc) | 128077 | 557537 | 535905
SQLite3 (mem) | 127106 | 557537 | 563316
TObjectList (static) | 300012 | 912408 | 913742
TObjectList (virtual) | 303287 | 402706 | 866551
SQLite3 (ext full) | 135380 | 267436 | 553158
SQLite3 (ext off) | 133696 | 262977 | 543065
SQLite3 (ext off exc) | 134698 | 264186 | 558596
SQLite3 (ext mem) | 137487 | 259713 | 557475
WinHTTP SQLite3 | 2198 | 209231 | 340460
Sockets SQLite3 | 8524 | 210260 | 387687
MongoDB (ack) | 8002 | 262353 | 271268
MongoDB (no ack) | 8234 | 272079 | 274582
ODBC SQLite3 | 19461 | 136600 | 201280
ZEOS SQlite3 | 33541 | 200835 | 306955
FireDAC SQlite3 | 7683 | 83532 | 112470
UniDAC SQlite3 | 2522 | 74030 | 96420
ODBC Firebird | 3446 | 69607 | 97585
ZEOS Firebird | 20296 | 114676 | 117210
FireDAC Firebird | 2376 | 46276 | 56269
UniDAC Firebird | 2189 | 66886 | 88102
Jet | 2640 | 166112 | 258277
NexusDB | 1413 | 120845 | 208246
Oracle | 1558 | 120977 | 159861
ZEOS Oracle | 1420 | 110367 | 137982
ODBC Oracle | 1620 | 43441 | 45764
FireDAC Oracle | 1231 | 42149 | 54795
UniDAC Oracle | 688 | 27083 | 30093
BDE Oracle | 860 | 3870 | 4036
MSSQL local | 10135 | 210837 | 437905
ODBC MSSQL | 12458 | 147544 | 256502
FireDAC MSSQL | 3776 | 72123 | 94091
UniDAC MSSQL | 2505 | 93231 | 135932
ODBC DB2 | 7649 | 84880 | 124486
FireDAC DB2 | 3155 | 71456 | 88264
ZEOS PostgreSQL | 8833 | 158760 | 223583
ODBC PostgreSQL | 10361 | 85680 | 120913
FireDAC PostgreSQL | 2261 | 58252 | 79002
UniDAC PostgreSQL | 864 | 86900 | 122856
ODBC MySQL | 10143 | 65538 | 82447
ZEOS MySQL | 2052 | 171803 | 245772
FireDAC MySQL | 3636 | 75081 | 105028
UniDAC MySQL | 4798 | 99940 | 146968
The SQLite3 layer gives amazing reading results, which makes it a perfect fit for most typical ORM use. When running with DB.LockingMode := lmExclusive defined (i.e. "off exc" rows), reading speed is very high, and benefits from exclusive access to the database file - see below. External database access is only required when data is expected to be shared with other processes, or for better scaling: e.g. for physical n-Tier installation with dedicated database server(s).
In the above table, it appears that all libraries based on DB.pas are slower than the others for reading speed. In fact, TDataSet appears to be a real bottleneck, due to its internal data marshalling. Even FireDAC, which is known to be very optimized for speed, is limited by the TDataSet structure. Our direct classes, or even ZEOS/ZDBC, perform better, since they are able to output JSON content with no additional marshalling, via a dedicated ColumnsToJSON() method.
For both writing and reading, TObjectList / TSQLRestStorageInMemory engine gives impressive results, but has the weakness of being in-memory, so it is not ACID by design, and the data has to fit in memory. Note that indexes are available for IDs and stored AS_UNIQUE properties.
As a consequence, searching non-unique values may be slow: the engine has to loop through all rows of data. But for unique values (defined as stored AS_UNIQUE), both insertion and search speed is awesome, due to its optimized O(1) hash algorithm - see the following benchmark, especially the "By name" column for the "TObjectList" rows, which corresponds to a search of a unique RawUTF8 property value via this hashing method.
Engine | By one | By name | All Virt. | All Direct
SQLite3 (file full) | 10461 | 9694 | 167095 | 167123
SQLite3 (file off) | 10549 | 9651 | 162956 | 144250
SQLite3 (mem) | 44737 | 32350 | 168651 | 168577
TObjectList (static) | 103577 | 70534 | 253292 | 254284
TObjectList (virt.) | 103553 | 60153 | 118203 | 256383
SQLite3 (ext file full) | 43367 | 22785 | 97083 | 170794
SQLite3 (ext file off) | 44099 | 22240 | 90592 | 165601
SQLite3 (ext mem) | 45220 | 23055 | 94688 | 168856
Oracle | 901 | 889 | 56639 | 88342
Jet | 1074 | 1071 | 52764 | 75999
The above results were obtained on a Core 2 Duo laptop, so numbers are lower than in the previous tables.
During the tests, internal caching - see below and ORM Cache - was disabled, so you may expect speed enhancements for real applications, when data is more read than written: for instance, when an object is retrieved from the cache, you can achieve more than 100,000 read requests per second, whatever database is used.
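As a hedged illustration, the ORM cache could be enabled like this, for the hypothetical TSQLUserAccount table used earlier in this document:
// enable ORM caching for a given table, here on the client side
Client.Cache.SetCache(TSQLUserAccount);          // cache all rows of this table
Client.Cache.SetTimeOut(TSQLUserAccount,120000); // expire entries after 2 minutes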
7.1.3.4. Analysis and use case proposal
When declared as virtual table (via a VirtualTableRegister call), you have the full power of SQL (including JOINs) at hand, with incredibly fast CRUD operations: 100,000 requests per second for objects read and write, including serialization and Client-Server communication!
Some providers are first-class citizens to mORMot, like SQLite3, Oracle, MS SQL, PostgreSQL, MySQL or IBM DB2. You can connect to them without the bottleneck of the DB.pas unit, nor any restriction of your Delphi license (a Starter edition is enough).
First of all, SQLite3 is still to be considered, even for a production server. Thanks to mORMot's architecture and design, this "embedded" database could be used as main database engine for a client-server application with heavy concurrent access - if you have doubts about its scaling abilities, see below. Here, "embedded" is not restricted to "mobile", but sounds like a self-contained, zero-configuration proven engine.
The remote access via HTTP gives pretty good results and, in this local benchmark, the plain socket client (i.e. the TSQLDBSocketConnectionProperties class) gives better results than the WinHTTP API (using TSQLDBWinHTTPConnectionProperties on the client side). But in real use, e.g. over the Internet, the WinHTTP API has been reported as more stable, so it may be preferred in production. With a SQLite3 backend, this offers pretty good performance, and the benefit of using standard HTTP for its transport.
Most recognized closed source databases are available:
Direct access to Oracle gives impressive results in BATCH mode (aka array binding). It may be an obligation if your end-customer already stores its data in such a server, for instance, and wants to leverage the licensing cost of its own IT solution. Oracle Express edition is free, but somewhat heavy and limited in terms of data/hardware size (see its licensing terms);
MS SQL Server, directly accessed via OleDB (or ODBC) gives pretty good timing. A MS SQL Server 2008 R2 Express instance is pretty well integrated with the Windows environment, for a very affordable price (i.e. for free) - the LocalDB (MSI installer) edition is enough to start with, but also with data/hardware size limitation, just like Oracle Express;
IBM DB2 is another good candidate, and the Express-C edition ("C" standing for Community) offers a no-charge opportunity to run an industry standard engine, with no restriction on data size, but with hardware limitations (16 GB of RAM and 2 CPU cores for the latest 10.5 release) and without some enterprise-level features;
We did not include Informix numbers here, since support for this database was provided as a user patch - thanks Esteban Martin for sharing! - and we do not have any such server available here;
NexusDB may be considered, if you have existing Delphi code and data - but it is less known and recognized than its commercial competitors.
Open Source databases are worth considering, especially in conjunction with an Open Source framework like mORMot:
MySQL is the well-known engine used by a lot of web sites, mainly in LAMP (Linux Apache MySQL PHP) configurations. Windows is not the best platform to run it, but it could be a fairly good candidate, especially its MariaDB fork, which sounds more attractive these days than the official main version, owned by Oracle;
PostgreSQL is an enterprise class database, with amazing features among its Open Source alternatives, and it really competes with commercial solutions. Even under Windows, we think it is easy to install and administrate, and it uses fewer resources than the commercial engines.
Firebird gave pretty consistent timing, when accessed via Zeos/ZDBC. We show here the embedded version, but the server edition is worth considering, since a lot of Delphi programmers are skilled with this free alternative to Interbase;
MongoDB appears as a serious competitor to SQL databases, with the potential benefit of horizontal scaling and installation/administration ease - performance is very high, and its document-based storage fits perfectly with mORMot's advanced ORM features like Shared nothing architecture (or sharding).
To access those databases, OleDB, ODBC or ZDBC providers may also be used, with direct access. mORMot is a very open-minded rodent: you can use any DB.pas provider, e.g. FireDAC, UniDAC, DBExpress, NexusDB or even the BDE, but with the additional layer introduced by using a TDataSet instance for reading.
Therefore, the typical use may be the following:
Database | Use case
internal SQLite3 file | Created by default. General safe data handling, with amazing speed in "off exc" mode
internal SQLite3 in-memory | Created with ':memory:' file name. Fast data handling with no persistence (e.g. for testing or temporary storage)
TObjectList static | Created with StaticDataCreate. Best possible performance for small amounts of data, without ACID nor SQL
TObjectList virtual | Created with VirtualTableRegister. Best possible performance for SQL over small amounts of data (or even unlimited amounts under Win64), if ACID is not required nor complex SQL
external Oracle / MS SQL / DB2 / PostgreSQL / MySQL / Informix / Firebird | Created with VirtualTableExternalRegister. Fast, secure and industry standard back-ends; data can be shared outside mORMot
external NexusDB | Created with VirtualTableExternalRegister. The free embedded version lets the whole engine be included within your executable, and reuses any existing code, but SQLite3 sounds like a better option
external Jet/MSAccess | Created with VirtualTableExternalRegister. Could be used as a data exchange format (e.g. with Office applications)
external Zeos | Created with VirtualTableExternalRegister. Allows access to several external engines, with direct Zeos/ZDBC access which will by-pass the DB.pas unit and its TDataSet bottleneck - and we will also prefer an active Open Source project!
external FireDAC/UniDAC | Created with VirtualTableExternalRegister. Allows access to several external engines, but through the DB.pas unit and its TDataSet bottleneck
external MongoDB | Created with StaticMongoDBRegister(). High-speed document-based storage, with horizontal scaling and advanced query abilities of nested sub-documents
Whatever database back-end is used, don't forget that mORMot design will allow you to switch from one library to another, just by changing a TSQLDBConnectionProperties class type. And note that you can mix external engines, on purpose: you are not tied to one single engine, but the database access can be tuned for each ORM table, according to your project needs.
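As a sketch of this flexibility (connection strings are illustrative only), switching the back-end is mostly a matter of instantiating another TSQLDBConnectionProperties class:
// the very same ORM code will run over SQLite3...
props := TSQLDBSQLite3ConnectionProperties.Create('data.db3','','','');
// ... or e.g. over PostgreSQL via ZDBC, just by changing this line:
// props := TSQLDBZEOSConnectionProperties.Create(
//   'zdbc:postgresql://127.0.0.1:5432/mydb','mydb','user','pass');
VirtualTableExternalRegisterAll(aModel,props);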
7.2. SQLite3 implementation
Beginning with the revision 1.15 of the framework, the SQLite3 engine itself has been separated from our mORMotSQLite3.pas unit, and defined as a stand-alone unit named SynSQLite3.pas. See SDD # DI-2.2.1.
It can be used therefore:
Either stand-alone with direct access of all its features, even using its lowest-level C API, via SynSQLite3.pas - but you won't be able to switch to another database engine easily;
Or stand-alone with high-level SQL access, using our SynDB.pas generic access classes, via SynDBSQLite3.pas - so you will be able to change to any other database engine (e.g. MS SQL, PostgreSQL, MySQL or Oracle) when needed;
Or Client-Server based access with all our ORM features - see mORMotSQLite3.pas.
We'll define here some highlights specific to our own implementation of the SQLite3 engine, and let you consult the official documentation of this great Open Source project at http://sqlite.org for general information about its common features.
7.2.1. Statically linked or using external dll
Since revision 1.18 of the framework, our SynSQlite3.pas unit is able to access the SQLite3 engine in two ways:
Either statically linked within the project executable;
Or from an external sqlite3.dll library file.
The SQLite3 APIs and constants are defined in SynSQlite3.pas, and accessible via a TSQLite3Library class definition. It defines a global sqlite3 variable as such:
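In essence, the declaration is the following:
var
  /// global access to the SQLite3 library API
  sqlite3: TSQLite3Library;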
To use the SQLite3 engine, an instance of TSQLite3Library class shall be assigned to this global variable. Then all mORMot's calls will be made through it, calling e.g. sqlite3.open() instead of sqlite3_open().
Referring to SynSQLite3Static.pas in the uses clause of your project is enough to link the .obj/.o engine into your executable.
Warning - breaking change: before version 1.18 of the framework, link of static .obj was forced - so you must now add a reference to SynSQLite3Static in your project uses clause to work as expected.
In order to use an external sqlite3.dll library, you have to set the global sqlite3 variable as such:
FreeAndNil(sqlite3); // release any previous instance (e.g. static)
sqlite3 := TSQLite3LibraryDynamic.Create;
Of course, FreeAndNil(sqlite3) is not mandatory, and is only needed to avoid a memory leak if another SQLite3 engine instance was already allocated (which may be the case if SynSQLite3Static is referred to somewhere in your project's units).
Here are some benchmarks, compiled with Delphi XE3, run in a 32-bit project, using either the static bcc-compiled engine, or an external sqlite3.dll, compiled via MinGW or Visual C++.
7.2.1.1. Static bcc-compiled .obj
First of all, our version, included within the SynSQLite3Static.pas unit, is to be benchmarked.
Writing speed
Engine | Direct | Batch | Trans | Batch Trans
SQLite3 (file full) | 477 | 389 | 97633 | 122865
SQLite3 (file off) | 868 | 869 | 96827 | 125862
SQLite3 (mem) | 84642 | 108624 | 104947 | 135105
TObjectList (static) | 338478 | 575373 | 337336 | 572147
TObjectList (virtual) | 338180 | 554446 | 331873 | 575837
SQLite3 (ext full) | 486 | 496 | 101419 | 7011
SQLite3 (ext off) | 799 | 303 | 105402 | 135109
SQLite3 (ext mem) | 93893 | 129550 | 109027 | 152811
Reading speed
Engine | By one | All Virtual | All Direct
SQLite3 (file full) | 26924 | 494559 | 500200
SQLite3 (file off) | 27750 | 496919 | 502714
SQLite3 (mem) | 124402 | 444404 | 495392
TObjectList (static) | 332778 | 907605 | 910249
TObjectList (virtual) | 331038 | 404891 | 905961
SQLite3 (ext full) | 102707 | 261547 | 521322
SQLite3 (ext off) | 131130 | 255806 | 513505
SQLite3 (ext mem) | 135784 | 248780 | 502664
Good old Borland C++ Builder produces some efficient code here. Those numbers are very good, when compared to the other two options. Probably, using FastMM4 as memory manager and tuned compilation options does make sense.
7.2.1.2. Official MinGW-compiled sqlite3.dll
Here we used the official sqlite3.dll library, as published in the http://sqlite.org web site, and compiled with the MinGW/GCC compiler.
Writing speed
Engine | Direct | Batch | Trans | Batch Trans
SQLite3 (file full) | 418 | 503 | 86322 | 119420
SQLite3 (file off) | 918 | 873 | 93196 | 127317
SQLite3 (mem) | 83108 | 106951 | 99892 | 138003
TObjectList (static) | 320204 | 573723 | 324696 | 547465
TObjectList (virtual) | 323247 | 563697 | 324443 | 564716
SQLite3 (ext full) | 501 | 410 | 100152 | 133679
SQLite3 (ext off) | 913 | 438 | 102806 | 135545
SQLite3 (ext mem) | 96028 | 122798 | 108363 | 150920
Reading speed
Engine | By one | All Virtual | All Direct
SQLite3 (file full) | 26883 | 473529 | 438904
SQLite3 (file off) | 27729 | 472188 | 451304
SQLite3 (mem) | 116550 | 459432 | 457959
TObjectList (static) | 318248 | 891265 | 905469
TObjectList (virtual) | 327739 | 359040 | 892697
SQLite3 (ext full) | 127346 | 180812 | 370288
SQLite3 (ext off) | 127749 | 227759 | 438096
SQLite3 (ext mem) | 129792 | 224386 | 436338
7.2.1.3. Visual C++ compiled sqlite3.dll
The Open Source wxsqlite project provides a sqlite3.dll library, compiled with Visual C++, and including RC4 and AES 128/256 encryption (better than the basic encryption implemented in SynSQLite3Static.pas) - not available in the official library.
Under Windows, the Visual C++ compiler gives very good results. It is a bit faster than the other two, despite a somewhat less efficient virtual table process.
As a conclusion, our SynSQLite3Static.pas statically linked implementation sounds like the best overall approach for Windows 32-bit: best speed for virtual tables (which is the core of our ORM), and no dll hell. No library to deploy and copy, everything is embedded in the project executable, ready to run as expected. External sqlite3.dll will be used for cross-platform support, and when targeting 64-bit Windows applications.
7.2.2. Prepared statement
In order to speed up the time spent in the SQLite3 engine (it may be useful for high-end servers), the framework is able to natively handle prepared SQL statements.
Starting with version 1.12 of the framework, we added an internal SQL statement cache in the database access, available for all SQL requests. Previously, only the one-record SQL SELECT * FROM ... WHERE RowID=... was prepared (used e.g. for the TSQLRest.Retrieve method).
That is, if a previous SQL statement is run again with some given parameters, a prepared version, available in the cache, is used, and the new parameters are bound to it before execution by SQLite3.
In some cases, it can speed the SQLite3 process a lot. From our profiling, prepared statements make common requests (i.e. select / insert / update on one row) at least two times faster, on an in-memory database (':memory:' specified as file name).
In order to use this statement caching, any SQL statement must have its parameters surrounded with ':(' and '):'. The SQL format was indeed enhanced by adding an optional way of marking parameters inside the SQL request, to enforce statement preparing and caching.
Therefore, there are now two ways of writing the same SQL request:
Write the SQL statement as usual:
SELECT * FROM TABLE WHERE ID=10;
in this case, the SQL will be parsed by the SQLite3 engine, a statement will be compiled, then run.
Use the new optional markers to identify the changing parameter:
SELECT * FROM TABLE WHERE ID=:(10):;
in this case, any matching already prepared statement will be re-used for direct run.
In the latter case, an internal pool of prepared TSQLRequest statements is maintained. The generic SQL code used for the matching will be this one:
SELECT * FROM TABLE WHERE ID=?;
and the integer value 10 will be bounded to the prepared statement before execution.
Examples of possible inlined values are given below (note that double " quotes are allowed for the text parameters, whereas SQL statements should only use single ' quotes):
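For instance, a SQL statement with such inlined values could look like this (table and column names are illustrative):
SELECT * FROM People WHERE FirstName=:('Salvador'): AND YearOfBirth=:(1904):;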
All internal SQL statement generated by the ORM are now using this new parameter syntax.
For instance, here is how an object deletion is implemented for the SQlite3 engine:
function TSQLRestServerDB.EngineDelete(Table: TSQLRecordClass; ID: TID): boolean;
begin
  if Assigned(OnUpdateEvent) then
    OnUpdateEvent(self,seDelete,Table,ID); // notify BEFORE deletion
  result := ExecuteFmt('DELETE FROM % WHERE RowID=:(%):;',[Table.SQLTableName,ID]);
end;
Using :(%): will let the DELETE FROM table_name WHERE RowID=? statement be prepared and reused between calls.
In your own code, you had better use something like the following:
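A hedged example, assuming a Client instance and the hypothetical TSQLUserAccount table used earlier in this document:
// the bound value is inlined, so the statement will be prepared and cached
aName := Client.OneFieldValue(TSQLUserAccount,'LogonName',
  FormatUTF8('ID=:(%):',[aID]));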
I found this SQL format enhancement much easier to use (and faster) in Delphi code than using parameters by name or by index, as in this classic VCL code:
SQL.Text := 'SELECT Name FROM Table WHERE ID=:Index';
SQL.ParamByName('Index').AsInteger := aID;
At the lowest level, in-lining the bound values inside the statement enables better serialization in a Client-Server architecture, and makes caching easier on the Server side: the whole SQL query contains all parameters within one unique RawUTF8 value, and can therefore be directly compared to the cached entries. As such, our framework is able to handle prepared statements without keeping bound parameters separated from the main SQL text.
It is also worth noting that external databases (see next paragraph) will also benefit from this statement preparation. Inlined values will be bound separately to the external SQL statement, to achieve the best speed possible.
7.2.3. R-Tree inclusion
Since the 2010-06-25 source code repository update, the RTREE extension is now compiled by default within all supplied .obj files.
An R-Tree is a special index that is designed for doing range queries. R-Trees are most commonly used in geospatial systems where each entry is a rectangle with minimum and maximum X and Y coordinates. Given a query rectangle, an R-Tree is able to quickly find all entries that are contained within the query rectangle or which overlap the query rectangle. This idea is easily extended to three dimensions for use in CAD systems. R-Trees also find use in time-domain range look-ups. For example, suppose a database records the starting and ending times for a large number of events. An R-Tree is able to quickly find all events, for example, that were active at any time during a given time interval, or all events that started during a particular time interval, or all events that both started and ended within a given time interval. And so forth. See http://www.sqlite.org/rtree.html
Any record which inherits from the TSQLRecordRTree class must have only sftFloat (i.e. Delphi double) published properties, grouped by pairs, each as minimum- and maximum-value, up to 5 dimensions (i.e. 11 columns, including the ID property). Its ID: TID property must be set before adding a TSQLRecordRTree to the database, e.g. to link the R-Tree representation to a regular TSQLRecord table containing the main data.
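Following those rules, the MapBox table used in the query below could be defined as such (a sketch, close to the documentation example):
type
  // 2D bounding-box table: one min/max pair per dimension
  TSQLRecordMapBox = class(TSQLRecordRTree)
  private
    fMinX, fMaxX, fMinY, fMaxY: double;
  published
    property MinX: double read fMinX write fMinX;
    property MaxX: double read fMaxX write fMaxX;
    property MinY: double read fMinY write fMinY;
    property MaxY: double read fMaxY write fMaxY;
  end;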
Queries against the ID or the coordinate ranges are almost immediate: so you can e.g. extract some coordinates box from the main regular TSQLRecord table, then use a TSQLRecordRTree-joined query to make the process faster; this is exactly what the TSQLRestClient.RTreeMatch method offers: for instance, calling it with aMapData.BlobField filled with [-81,-79.6,35,36.2] will execute the following SQL statement:
SELECT MapData.ID From MapData, MapBox WHERE MapData.ID=MapBox.ID
AND minX>=:(-81.0): AND maxX<=:(-79.6): AND minY>=:(35.0): AND maxY<=:(36.2):
AND MapBox_in(MapData.BlobField,:('\uFFF0base64encoded-81,-79.6,35,36.2'):);
The MapBox_in SQL function is registered in the TSQLRestServerDB.Create constructor for all TSQLRecordRTree classes of the current database model. Both the BlobToCoord and ContainedIn class methods are used to handle the box storage in the BLOB. By default, it will process a raw array of double, with a default box match (that is, the ContainedIn method will match the simple minX>=...maxY<=... where clause).
7.2.4. FTS3/FTS4/FTS5
FTS3/FTS4/FTS5 are SQLite3 virtual table modules that allow users to perform full-text searches on a set of documents. The most common (and effective) way to describe full-text searches is "what Google, Yahoo and Altavista do with documents placed on the World Wide Web". Users input a term, or series of terms, perhaps connected by a binary operator or grouped together into a phrase, and the full-text query system finds the set of documents that best matches those terms considering the operators and groupings the user has specified.
See http://www.sqlite.org/fts3.html as reference material about FTS3/FTS4 usage in SQLite3, and https://www.sqlite.org/fts5.html about FTS5. In short, FTS5 is a new version of FTS4 that includes various fixes and solutions for problems that could not be fixed in FTS4 without sacrificing backwards compatibility.
Since recent versions of the framework, the sqlite3.obj/.o static file available with the distribution includes the FTS3/FTS4/FTS5 engines (also on other platforms with FPC).
7.2.4.1. Dedicated FTS3/FTS4/FTS5 record type
In order to allow easy use of the FTS feature, some types have been defined:
TSQLRecordFTS3 to create a FTS3 table with default "simple" stemming;
TSQLRecordFTS3Porter to create a FTS3 table using the Porter Stemming algorithm (see below);
TSQLRecordFTS3Unicode61 to create a FTS3 table using the Unicode61 Stemming algorithm (see below);
TSQLRecordFTS4 to create a FTS4 table with default "simple" stemming;
TSQLRecordFTS4Porter to create a FTS4 table using the Porter Stemming algorithm;
The "stemming" algorithm - see http://sqlite.org/fts3.html#tokenizer - is the way the english text is parsed for creating the word index from raw text.
The simple (default) tokenizer extracts tokens from a document or basic FTS full-text query according to the following rules:
A term is a contiguous sequence of eligible characters, where eligible characters are all alphanumeric characters, the "_" character, and all characters with UTF code-points greater than or equal to 128. All other characters are discarded when splitting a document into terms. Their only contribution is to separate adjacent terms.
All uppercase characters within the ASCII range (UTF code-points less than 128), are transformed to their lowercase equivalents as part of the tokenization process. Thus, full-text queries are case-insensitive when using the simple tokenizer.
For example, for a document containing the text "Right now, they're very frustrated.", the terms extracted from the document and added to the full-text index are, in order, "right now they re very frustrated". Such a document will match a full-text query such as "MATCH 'Frustrated'", as the simple tokenizer transforms the term in the query to lowercase before searching the full-text index.
The Porter Stemming algorithm tokenizer uses the same rules to separate the input document into terms, but as well as folding all terms to lower case it uses the Porter Stemming algorithm to reduce related English language words to a common root. For example, using the same input document as in the paragraph above, the porter tokenizer extracts the following tokens: "right now thei veri frustrat". Even though some of these terms are not even English words, in some cases using them to build the full-text index is more useful than the more intelligible output produced by the simple tokenizer. Using the porter tokenizer, the document not only matches full-text queries such as "MATCH 'Frustrated'", but also queries such as "MATCH 'Frustration'", as the term "Frustration" is reduced by the Porter stemmer algorithm to "frustrat" - just as "Frustrated" is. So, when using the porter tokenizer, FTS is able to find not just exact matches for queried terms, but matches against similar English language terms. For more information on the Porter Stemmer algorithm, please refer to the http://tartarus.org/~martin/PorterStemmer page.
The Unicode61 Stemming algorithm tokenizer works very much like "simple" except that it does simple unicode case folding according to rules in Unicode Version 6.1 and it recognizes unicode space and punctuation characters and uses those to separate tokens. By default, "Unicode61" also removes all diacritics from Latin script characters.
7.2.4.3. FTS searches
A good approach is to store your data in a regular TSQLRecord table, then store the text content to be indexed in a separate FTS table (e.g. a TSQLRecordFTS5 descendant), associated with the main table via its ID / DocID property. Note that for TSQLRecordFTS* types, the ID property was renamed DocID, which is the internal name of the FTS virtual table definition for its unique integer key ID property.
For example (extracted from the regression test code), you can define this new class:
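A sketch of this class, close to the regression-test definition (the exact code may differ):
type
  TSQLFTSTest = class(TSQLRecordFTS3)
  private
    fSubject: RawUTF8;
    fBody: RawUTF8;
  published
    property Subject: RawUTF8 read fSubject write fSubject;
    property Body: RawUTF8 read fBody write fBody;
  end;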
Note that FTS tables must contain only UTF-8 text fields, that is RawUTF8 (under Delphi 2009 and up, you could also use the Unicode string type, which is mapped as a UTF-8 text field for the SQLite3 engine).
Then you can add some Body/Subject content to this FTS table, just like any regular TSQLRecord content, via the ORM feature of the framework:
FTS := TSQLFTSTest.Create;
try
  Check(aClient.TransactionBegin(TSQLFTSTest)); // MUCH faster with this
  for i := StartID to StartID+COUNT-1 do
  begin
    FTS.DocID := IntArray[i];
    FTS.Subject := aClient.OneFieldValue(TSQLRecordPeople,'FirstName',FTS.DocID);
    FTS.Body := FTS.Subject+' bodY'+IntToStr(FTS.DocID);
    aClient.Add(FTS,true);
  end;
  aClient.Commit; // Commit must be BEFORE OptimizeFTS3, memory leak otherwise
  Check(FTS.OptimizeFTS3Index(Client.fServer));
finally
  FTS.Free;
end;
The steps above are just typical. The only difference with a "standard" ORM approach is that the DocID property must be set before adding the TSQLRecordFTS5 instance: there is no ID automatically created by SQLite, but an ID must be specified in order to link the FTS record to the original TSQLRecordPeople row, from its ID.
To support full-text queries, FTS maintains an inverted index that maps from each unique term or word that appears in the dataset to the locations in which it appears within the table contents. The dedicated OptimizeFTS3Index method is called to merge all existing index b-trees into a single large b-tree containing the entire index - this method will work with FTS3, FTS4 and FTS5 classes, whatever its name states. This can be an expensive operation, but may speed up future queries: you should not call this method after every modification of the FTS tables, but after some text has been added.
Then the FTS search query will use the custom FTSMatch method:
Check(aClient.FTSMatch(TSQLFTSTest,'Subject MATCH ''salVador1''',IntResult));
The matching IDs are stored in the IntResult integer dynamic array. Note that you can use a regular SQL query instead. Use of the FTSMatch method is not mandatory: in fact, it is just a wrapper around the OneFieldValues method, just using the "neutral" RowID column name for the results:
function TSQLRest.FTSMatch(Table: TSQLRecordFTS3Class;
  const WhereClause: RawUTF8; var DocID: TIntegerDynArray): boolean;
begin // FTS3 tables do not have any ID, but RowID or DocID
  result := OneFieldValues(Table,'RowID',WhereClause,DocID);
end;
An overloaded FTSMatch method has been defined, and will handle detailed matching information, able to use a ranking algorithm. With this method, the results will be sorted by relevance:
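The call is close to the following (a sketch; the weight values match the SQL statement shown just below):
Check(aClient.FTSMatch(TSQLFTSTest,'body1*',IntResult,[1,0.5]));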
This method expects some additional constant parameters for weighting each FTS table column (there must be the same number of PerFieldWeight parameters as there are columns in the TSQLRecordFTS5 table). In the above sample code, the Subject field will have a weight of 1.0, and the Body will be weighted as 0.5, i.e. any match in the 'body' column content will be ranked half as high as any match in the 'subject', which is probably of higher density.
The above query will call the following SQL statement:
SELECT RowID FROM FTSTest WHERE FTSTest MATCH 'body1*'
ORDER BY rank(matchinfo(FTSTest),1.0,0.5) DESC
The rank internal SQL function has been implemented in Delphi, following the guidelines of the official SQLite3 documentation - as available from their web site at http://www.sqlite.org/fts3.html#appendix_a - to implement ranking in the most efficient way. It will return the RowID of documents that match the full-text query, sorted from most to least relevant. When calculating relevance, query term instances in the 'subject' column are given twice the weighting of those in the 'body' column.
7.2.4.4. FTS4 index tables without content
Just as SQlite3 allows, the framework permits FTS4 to forego storing the text being indexed, letting the indexed documents be stored in a database table created and managed by the user (an "external content" FTS4 table).
Because the indexed documents themselves are usually much larger than the full-text index, this option can be used to achieve significant storage space savings. Contentless FTS4 tables still support SELECT statements. However, it is an error to attempt to retrieve the value of any table column other than the docid column. The auxiliary function matchinfo() may be used - so TSQLRest.FTSMatch method will work as expected, but snippet() and offsets() will cause an exception at execution.
For instance, in sample "30 - MVC Server", we define those two tables:
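A minimal sketch of what the FTS side of those declarations may look like (deduced from the field names mentioned below; the exact classes and parent type used in the sample may differ):
 TSQLArticleSearch = class(TSQLRecordFTS4Porter)
 private
   fTitle: RawUTF8;
   fAbstract: RawUTF8;
   fContent: RawUTF8;
 published
   property Title: RawUTF8 read fTitle write fTitle;
   property Abstract: RawUTF8 read fAbstract write fAbstract;
   property Content: RawUTF8 read fContent write fContent;
 end;
with TSQLArticle being a regular TSQLRecord declaring (at least) the same Title, Abstract and Content fields.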
Then we initialize the database model so that all data is stored only in TSQLArticle, not in TSQLArticleSearch, using an "external content" FTS4 table to index the text from the selected Title, Abstract and Content fields of TSQLArticle:
function CreateModel: TSQLModel;
begin
result := TSQLModel.Create([TSQLBlogInfo,TSQLAuthor,
TSQLTag,TSQLArticle,TSQLComment,TSQLArticleSearch],'blog');
result.Props[TSQLArticleSearch].FTS4WithoutContent(TSQLArticle);
...
The TSQLModelRecordProperties.FTS4WithoutContent() method will in fact create the needed SQLite3 triggers, to automatically populate the ArticleSearch Full Text indexes when the main Article row changes.
Since this FTS4 feature is specific to SQlite3, and triggers do not work on virtual tables (for now), this method won't do anything if TSQLArticleSearch or TSQLArticle is stored in an external database - see below. Both need to be stored in the main SQLite3 DB.
In the 30 - MVC Server sample, the search will be performed as such:
if scop^.GetAsRawUTF8('match',match) and fHasFTS then
begin
  if scop^.GetAsDouble('lastrank',rank) then
    whereClause := 'and rank<? ';
  whereClause := 'join (select docid,rank(matchinfo(ArticleSearch),1.0,0.7,0.5) as rank '+
    'from ArticleSearch where ArticleSearch match ? '+whereClause+
    'order by rank desc limit 100) as r on (r.docid=Article.id)';
  articles := RestModel.RetrieveDocVariantArray(
    TSQLArticle,'',whereClause,[match,rank],
    'id,title,tags,author,authorname,createdat,abstract,contenthtml,rank');
In the above query expression, the rank() function is used over the detailed FTS4 search statistics returned by matchinfo(), using a 1.0 weight for any match in the Title column, 0.7 for the Abstract column, and 0.5 for Content. The matching articles content is then returned in an articles:TDocVariant array, ready to be rendered on the web page.
7.2.5. Column collations
In any database, there is a need to define how column data is to be compared. It is needed for proper search and ordering of the data. This is the purpose of so-called collations.
By default, when SQLite compares two strings, it uses a collating sequence or collating function (two words for the same thing) to determine which string is greater or if the two strings are equal. SQLite has three built-in collating functions: BINARY, NOCASE, and RTRIM:
BINARY - Compares string data using memcmp(), regardless of text encoding.
NOCASE - The same as binary, except the 26 upper case characters of ASCII are folded to their lower case equivalents before the comparison is performed. Note that only ASCII characters are case folded. Plain SQLite does not attempt to do full Unicode case folding due to the size of the tables required - but you could use mORMot's SYSTEMNOCASE, or WIN32CASE/WIN32NOCASE custom collations for enhanced case folding support (see below);
RTRIM - The same as binary, except that trailing space characters are ignored.
In the mORMot ORM, we define some additional kinds of collations, via internal calls to the sqlite3_create_collation() API:
TSQLFieldType - Default collation
sftAnsiText - NOCASE
sftUTF8Text - SYSTEMNOCASE, i.e. using UTF8ILComp(), which will ignore Win-1252 Latin accents
sftDateTime - ISO8601, i.e. decoding the text into a date/time value before comparison
sftObject / sftVariant - BINARY, since it is stored as plain JSON content
sftBlob / sftBlobDynArray / sftBlobCustom - BINARY
You can override those default collation schemes by calling the TSQLRecordProperties.SetCustomCollationForAll() method (which will override the collation of all fields of a given type) or the SetCustomCollation() method (which will override a given field) in an overridden class procedure InternalRegisterCustomProperties() or InternalDefineModel(), so that it will be common to all database models, for both client and server, every time the corresponding TSQLRecord is used.
The following collations are therefore available when using SQLite3 within the mORMot ORM:
Collation - Description
BINARY - Default memcmp() comparison
NOCASE - Default ASCII 7 bit comparison
RTRIM - Default memcmp() comparison with right trim
SYSTEMNOCASE - mORMot's Win-1252 8 bit comparison
ISO8601 - mORMot's date/time comparison
WIN32CASE - mORMot's case-sensitive comparison using the Windows API
WIN32NOCASE - mORMot's case-insensitive comparison using the Windows API
Note that WIN32CASE/WIN32NOCASE will be slower than the others, but will handle properly any kind of complex scripting. For instance, if you want to use the Unicode-ready Windows API at database level, you can set for each database model:
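Here is a minimal sketch of such a setting, assuming a hypothetical TSQLMyRecord class and using the SetCustomCollationForAll() method introduced above:
 class procedure TSQLMyRecord.InternalDefineModel(Props: TSQLRecordProperties);
 begin
   // use the Windows API Unicode comparison for all UTF-8 text fields
   Props.SetCustomCollationForAll(sftUTF8Text,'WIN32NOCASE');
 end;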
Note that the ORM won't change the collation once the table is created, since SQLite3 itself does not support this directly. Its REINDEX command is somewhat useful if you need to change the collation function implementation, but it won't help directly to change the collation itself on a given column...
On non-Windows platforms, it will either use the system ICU library (if available), or fall back to the FPC RTL with temporary UnicodeString values - which requires including the cwstring unit in your project uses clause. Note that depending on the library used, the results may not be consistent: so if you move a SQLite3 database file e.g. from a Windows system to a Linux system with a WIN32CASE collation, you should regenerate all your indexes!
If you use our non-standard/extended collations (i.e. SYSTEMNOCASE/ISO8601/WIN32CASE/WIN32NOCASE), you may have trouble running requests with "plain" SQLite3 tools. But you can use our SynDBExplorer safely, since it will declare all the above collations.
When using external databases - see below, if the content is retrieved directly from the database driver and by-passes the virtual table mechanism - see below, returned data may not match your expectations according to the custom collations: you will need to customize the external tables definition by hand, with the proper SQL statement of each external DB engine.
Note that mORMot 2 offers a new UNICODENOCASE collation, which follows Unicode 10.0 without any Windows or ICU API call, so is consistent on all systems - and is also faster.
7.2.6. REGEXP operator
Our SQLite3 engine can use regular expressions within its SQL queries, by enabling the REGEXP operator in addition to the standard SQL operators (= == != <> IS IN LIKE GLOB MATCH).
7.2.6.1. Default REGEXP Engine
By default, and since mORMot 1.18.6218 (25 January 2021), our static SQlite3 engine includes a compact and efficient enough C extension, as available from the official SQLite3 project source code tree. It is included with the official amalgamation file during our compilation phase.
So you don't need to do anything to be able to use the REGEXP operator in your queries:
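For instance, such a query may look as follows - a hedged sketch using the RetrieveList() ORM wrapper, with people being a TObjectList variable; the actual sample code may differ:
 // retrieve all TSQLRecordPeople whose FirstName contains the 'Finley' word
 people := aClient.RetrieveList(TSQLRecordPeople,'FirstName REGEXP ?',['\bFinley\b']);
 try
   // work with the matching records here
 finally
   people.Free;
 end;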
The above code will execute the following SQL statement (with a prepared parameter for the regular expression itself):
SELECT * from People WHERE Firstname REGEXP '\bFinley\b';
That is, it will find all objects where TSQLRecordPeople.FirstName will contain the 'Finley' word - in a regular expression, \b defines a word boundary search.
In fact, the REGEXP operator is a special syntax for the regexp() user function. No regexp() user function is defined by default and so use of the REGEXP operator will normally result in an error message. Calling CreateRegExFunction() for a given connection will add a SQL function named "regexp()" at run-time, which will be called in order to implement the REGEXP operator.
7.2.6.2. PCRE REGEXP Engine
If you want to use the Open Source PCRE library to perform the searches, instead of this default C extension, you should include the SynSQLite3RegEx.pas unit in your uses clause, and register the RegExp() SQL function to a given SQLite3 database instance, as such:
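A minimal sketch of such a registration may look like this - the exact routine name (written here as CreateRegExpFunction) and its low-level SQLite3 handle parameter are assumptions, deduced from the unit description:
 uses SynSQLite3RegEx; // PCRE-based implementation of the regexp() SQL function
 ...
 // register the regexp() function on the server-side SQLite3 connection
 CreateRegExpFunction(aServer.DB.DB);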
This unit will call directly the UTF-8 API of the PCRE library, and maintain a per-connection cache of compiled regular expressions to ensure the best performance possible.
7.2.7. ACID and speed
As stated above in Data access benchmark, the default SQlite3 write speed is quite slow, when running on a normal hard drive. By default, the engine will pause after issuing an OS-level write command. This guarantees that the data is written to the disk, and ensures the ACID properties of the database engine.
ACID is an acronym for "Atomicity Consistency Isolation Durability" properties, which guarantee that database transactions are processed reliably: for instance, in case of a power loss or hardware failure, the data will be saved on disk in a consistent way, with no potential loss of data.
In SQLite3, ACID is implemented by two means at file level:
Synchronous writing: it means that the engine will wait for any written content to be flushed to disk before processing the next request;
File locking: it means that the database file is locked for exclusive use during each write, so that several processes can access the same database file concurrently without corrupting it.
Changing these default settings can ensure much better writing performance.
7.2.7.1. Synchronous writing
You can overwrite the default ACID behavior by setting the TSQLDataBase.Synchronous property to smOff instead of the default smFull setting. When Synchronous is set to smOff, SQLite continues without syncing as soon as it has handed data off to the operating system. If the application running SQLite crashes, the data will be safe, but the database might become corrupted if the operating system crashes or the computer loses power before that data has been written to the disk surface. On the other hand, some operations are as much as 50 or more times faster with this setting.
When the tests performed during Data access benchmark use Synchronous := smOff, "Write one" speed is enhanced from 8-9 rows per second to about 400 rows per second, on a physical hard drive (SSD or NAS drives may not suffer from this delay).
So depending on your application requirements, you may switch Synchronous setting to off.
To change the main SQLite3 engine synchronous parameter, you may code for instance:
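A hedged sketch, assuming aServer is the running TSQLRestServerDB instance:
 aServer.DB.Synchronous := smOff;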
Never forget that you may have several SQlite3 engines within a single mORMot server!
7.2.7.2. File locking
You can overwrite the first default ACID behavior by setting the TSQLDataBase.LockingMode property to lmExclusive instead of the default lmNormal setting. When LockingMode is set to lmExclusive, SQLite will lock the database file for exclusive use during the whole session. It will prevent other processes (e.g. database viewer tools) from accessing the file at the same time, but small write transactions will be much faster, by a factor usually greater than 40. Bigger transactions involving several hundreds/thousands of INSERTs won't be accelerated - but individual insertions will have a major speed up - see Data access benchmark.
To change the main SQLite3 engine locking mode parameter, you may code for instance:
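A hedged sketch, again assuming aServer is the running TSQLRestServerDB instance:
 aServer.DB.LockingMode := lmExclusive;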
In fact, exclusive file locking improves the reading speed by a factor of 4 (in case of individual row retrieval). As such, defining LockingMode := lmExclusive without Synchronous := smOff could be of great benefit for a server whose purpose is mainly to serve ORM content to clients.
7.2.7.3. Performance tuning
By default, the slow but truly ACID setting will be used with mORMot, just as with SQlite3. We do not change this policy, since it ensures the best safety, at the expense of slow writing outside a transaction.
The best performance will be achieved by combining the two previous options, as such:
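A hedged sketch combining both settings on the main SQLite3 engine, assuming aServer is the running TSQLRestServerDB instance:
 aServer.DB.LockingMode := lmExclusive;
 aServer.DB.Synchronous := smOff;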
If you can afford losing some data in a very rare border case, or if you are sure your hardware configuration is safe (e.g. if the server is connected to a power inverter and has RAID disks) and that you have backups at hand, setting Synchronous := smOff will help your application scale for writing. Setting LockingMode := lmExclusive will benefit both writing and reading speed. Consider using an external and dedicated database (like Firebird, Oracle, PostgreSQL, MySQL, DB2, Informix or MS SQL) if your security expectations are very high, and if the default safe but slow setting is not enough for you.
7.2.8. Database backup
In all cases, do not forget to perform backups of your SQlite3 database as often as possible (at least several times a day). Adding a backup feature on the server side is as simple as running:
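A hedged sketch, assuming aServer is the running TSQLRestServerDB instance, and consistent with the parameters described just below:
 // copy 1024 pages per step, sleep 10 ms between steps, no progress callback
 aServer.DB.BackupBackground('backup.db3',1024,10,nil);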
The above line will perform a background live backup of the main SQLite3 database, by steps of 1024 pages (i.e. it will process 1 MB per step, since default page size is 1024 bytes), performing a little sleep of 10 milliseconds between each 1 MB copy step, allowing main CRUD / ORM operations to continue uninterrupted during the backup. You can even specify an OnProgress: TSQLDatabaseBackupEvent callback event, to monitor the backup process.
The same backup process can be used e.g. to save an in-memory SQLite3 database into a SQLite3 file, as such:
if aInMemoryDB.BackupBackground('backup.db3',-1,0,nil) then
aInMemoryDB.BackupBackgroundWaitUntilFinished;
Above code will save the aInMemoryDB database into the 'backup.db3' file.
7.3. Virtual Tables magic
The SQlite3 engine has the unique ability to create Virtual Tables from code. From the perspective of an SQL statement, the virtual table object looks like any other table or view. But behind the scenes, queries from and updates to a virtual table invoke callback methods on the virtual table object instead of reading and writing to the database file.
The virtual table mechanism allows an application to publish interfaces that are accessible from SQL statements as if they were tables. SQL statements can in general do anything to a virtual table that they can do to a real table, with the following exceptions:
One cannot create a trigger on a virtual table.
One cannot create additional indices on a virtual table. (Virtual tables can have indices but that must be built into the virtual table implementation. Indices cannot be added separately using CREATE INDEX statements.)
One cannot run ALTER TABLE ... ADD COLUMN commands against a virtual table.
Particular virtual table implementations might impose additional constraints. For example, some virtual implementations might provide read-only tables. Or some virtual table implementations might allow INSERT or DELETE but not UPDATE. Or some virtual table implementations might limit the kinds of UPDATEs that can be made.
Example of virtual tables, already included in the SQLite3 engine, are FTS or RTREE tables.
Our framework introduces new types of custom virtual table. You'll find classes like TSQLVirtualTableJSON or TSQLVirtualTableBinary which handle in-memory data structures. Or it might represent a view of data on disk that is not in the SQLite3 format (e.g. TSQLVirtualTableLog). It can be used to access any external database, just as if they were native SQLite3 tables - see below. Or the application might compute the content of the virtual table on demand.
Thanks to the generic implementation of Virtual Table in SQLite3, you can use such tables in your SQL statement, and even safely execute a SELECT statement with JOIN or custom functions, mixing normal SQLite3 tables and any other Virtual Table. From the ORM point of view, virtual tables are just tables, i.e. they inherit from TSQLRecordVirtual, which inherits from the common base TSQLRecord class.
7.3.1. Virtual Table module classes
A dedicated mechanism has been added to the framework, beginning with revision 1.13, in order to easily add such virtual tables with pure Delphi code.
In order to implement a new Virtual Table type, you'll have to define a so-called Module to handle the fields and data access, and an associated Cursor for the SELECT statements. This is implemented by the two TSQLVirtualTable and TSQLVirtualTableCursor classes as defined in the mORMot.pas unit.
For instance, here are the default Virtual Table classes deriving from those classes:
Virtual Tables classes hierarchy
TSQLVirtualTableJSON, TSQLVirtualTableBinary and TSQLVirtualTableCursorJSON classes will implement a Virtual Table using a TSQLRestStorageInMemory instance to handle fast in-memory static databases. Disk storage will be encoded either as UTF-8 JSON (for the TSQLVirtualTableJSON class, i.e. the 'JSON' module), or in a proprietary SynLZ compressed format (for the TSQLVirtualTableBinary class, i.e. the 'Binary' module). File extension on disk will be simply .json for the 'JSON' module, and .data for the 'Binary' module. Just to mention the size on disk difference, the 502 KB People.json content (as created by included regression tests) is stored into a 92 KB People.data file, in our proprietary optimized format.
Note that the virtual table module name is retrieved from the class name. For instance, the TSQLVirtualTableJSON class will have its module named as 'JSON' in the SQL code.
As you may have already noticed, all this Virtual Table mechanism is implemented in mORMot.pas. Therefore, it is independent from the SQLite3 engine, even if, to my knowledge, there is no other SQL database engine around able to implement this pretty nice feature.
7.3.2. Defining a Virtual Table module
Here is how the TSQLVirtualTableLog class type is defined, which will implement a Virtual Table module named "Log". Note that the SQLite3 virtual table module name will be computed from the class name, trimming the leading TSQLVirtualTable characters, e.g. TSQLVirtualTableLog will trim its TSQLVirtualTable prefix and define a 'Log' virtual module.
Adding a new module is just made by overriding some Delphi methods:
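The overridden class method is likely close to the following sketch - the exact TVirtualTableModuleProperties record fields are assumptions, deduced from the description just below:
 class procedure TSQLVirtualTableLog.GetTableModuleProperties(
   var aProperties: TVirtualTableModuleProperties);
 begin
   aProperties.Features := [vtWhereIDPrepared];          // read-only: no vtWrite
   aProperties.CursorClass := TSQLVirtualTableCursorLog; // cursor for SELECT
   aProperties.RecordClass := TSQLRecordLogFile;         // defines the columns
 end;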
The supplied feature set defines a read-only module (since vtWrite is not selected), and vtWhereIDPrepared indicates that any RowID=? SQL statement will be handled as such in the cursor class (we will use the log row as ID number, start counting at 1, so we can speed up RowID=? WHERE clause easily). The associated cursor class is returned. And a TSQLRecord class is specified, to define the handled fields - its published properties definition will be used by the inherited Structure method to specify to the SQLite3 engine which kind of fields are expected in the SQL statements:
TSQLRecordLogFile = class(TSQLRecordVirtualTableAutoID)
protected
fContent: RawUTF8;
fDateTime: TDateTime;
fLevel: TSynLogInfo;
published
  /// the log event time stamp
  property DateTime: TDateTime read fDateTime;
  /// the log event level
  property Level: TSynLogInfo read fLevel;
  /// the textual message associated to the log event
  property Content: RawUTF8 read fContent;
end;
You could have overridden the Structure method in order to provide the CREATE TABLE SQL statement expected. But using Delphi class RTTI allows the construction of this SQL statement with the appropriate column type and collation, common to what the rest of the ORM will expect.
Of course, this RecordClass property is not mandatory. For instance, the TSQLVirtualTableJSON.GetTableModuleProperties method won't return any associated TSQLRecordClass, since it will depend on the table it is implementing, i.e. the running TSQLRestStorageInMemory instance. Instead, the Structure method is overridden, and will return the corresponding field layout of each associated table.
Here is how the Prepare method is implemented, and will handle the vtWhereIDPrepared feature:
function TSQLVirtualTable.Prepare(var Prepared: TSQLVirtualTablePrepared): boolean;
begin
  result := Self<>nil;
  if result then
    if (vtWhereIDPrepared in fModule.Features) and
       Prepared.IsWhereIDEquals(true) then
      with Prepared.Where[0] do
      begin // check ID=?
        Value.VType := varAny; // mark TSQLVirtualTableCursorJSON expects it
        OmitCheck := true;
        Prepared.EstimatedCost := 1;
      end
    else
      Prepared.EstimatedCost := 1E10; // generic high cost
end;
Then here is how each 'log' virtual table module instance is created:
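A hedged sketch of this constructor, deduced from the description just below - the exact inherited signature and fallback logic may differ in the actual unit:
 constructor TSQLVirtualTableLog.Create(aModule: TSQLVirtualTableModule;
   const aTableName: RawUTF8; FieldCount: integer; Fields: TRawUTF8DynArray);
 var aFileName: TFileName;
 begin
   inherited;
   if FieldCount=1 then
     aFileName := UTF8ToString(Fields[0]) else  // file name given as parameter
     aFileName := UTF8ToString(aTableName);     // fallback: use the SQL table name
   fLogFile := TSynLogFile.Create(aFileName);
 end;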
It only associates a TSynLogFile instance according to the supplied file name (our SQL CREATE VIRTUAL TABLE statement only expects one parameter, which is the .log file name on disk - if this file name is not specified, it will use the SQL table name instead).
Since this class inherits from TSQLVirtualTableCursorIndex, it will have the generic fCurrent / fMax protected fields, and will have the HasData, Next and Search methods using those properties to handle navigation throughout the cursor.
The overridden Search method consists only in:
function TSQLVirtualTableCursorLog.Search(
  const Prepared: TSQLVirtualTablePrepared): boolean;
begin
  result := inherited Search(Prepared); // mark EOF by default
  if result then
  begin
    fMax := TSQLVirtualTableLog(Table).fLogFile.Count-1;
    if Prepared.IsWhereIDEquals(false) then
    begin
      fCurrent := Prepared.Where[0].Value.VInt64-1; // ID=? -> index := ID-1
      if cardinal(fCurrent)<=cardinal(fMax) then
        fMax := fCurrent else   // found one
        fMax := fCurrent-1;     // out of range ID
    end;
  end;
end;
The only purpose of this method is to handle the RowID=? statement SELECT WHERE clause, returning fCurrent=fMax=ID-1 for any valid ID, or fMax<fCurrent, i.e. no result, if the ID is out of range. In fact, the Search method of the cursor class must handle all cases which have been notified as handled during the call to the Prepare method. In our case, since we have set the vtWhereIDPrepared feature and the Prepare method identified it in the request and set the OmitCheck flag, our Search method MUST handle the RowID=? case.
If the WHERE clause is not RowID=? (i.e. if Prepared.IsWhereIDEquals returns false), it will return fCurrent=0 and fMax=fLogFile.Count-1, i.e. it will let the SQLite3 engine loop through all rows searching for the data.
Each column value is retrieved by this method:
function TSQLVirtualTableCursorLog.Column(aColumn: integer;
var aResult: TVarData): boolean;
var LogFile: TSynLogFile;
begin
result := false;
if (self=nil) or (fCurrent>fMax) then
exit;
LogFile := TSQLVirtualTableLog(Table).fLogFile;
if LogFile=nil then
exit;
case aColumn of
-1: SetColumn(aResult,fCurrent+1); // ID = index + 1
0: SetColumn(aResult,LogFile.EventDateTime(fCurrent));
1: SetColumn(aResult,ord(LogFile.EventLevel[fCurrent]));
2: SetColumn(aResult,LogFile.LinePointers[fCurrent],LogFile.LineSize(fCurrent));
else exit;
end;
result := true;
end;
As stated by the documentation of the TSQLVirtualTableCursor class, -1 is the column index for the RowID, and then will follow the columns as defined in the text returned by the Structure method (in our case, the DateTime, Level, Content fields of TSQLRecordLogFile).
The SetColumn overloaded methods can be used to set the appropriate result to the aResult variable. For UTF-8 text, it will use a temporary in-memory space, to ensure that the text memory will be still available at least until the next Column method call.
7.3.3. Using a Virtual Table module
From the low-level SQLite3 point of view, here is how this "Log" virtual table module can be used, directly from the SQLite3 engine.
First we will register this module to a DB connection (this method is to be used only in case of such low-level access - in our ORM you should never call this method, but TSQLModel.VirtualTableRegister instead, cf. next paragraph):
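A hedged sketch of this low-level registration, assuming the RegisterVirtualTableModule() helper of mORMotSQLite3.pas and a Demo: TSQLDataBase instance:
 RegisterVirtualTableModule(TSQLVirtualTableLog,Demo);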
Then we can execute the following SQL statement to create the virtual table for the Demo database connection:
Demo.Execute('CREATE VIRTUAL TABLE test USING log(temptest.log);');
This will create the virtual table. Since all fields are already known by the TSQLVirtualTableLog class, it is not necessary to specify the fields at this level. We only specify the log file name, which will be retrieved by the TSQLVirtualTableLog.Create constructor.
Demo.Execute('select count(*) from test',Res);
Check(Res=1);
s := Demo.ExecuteJSON('select * from test');
s2 := Demo.ExecuteJSON('select * from test where rowid=1');
s3 := Demo.ExecuteJSON('select * from test where level=3');
You can note that there is no difference with a normal SQLite3 table, from the SQL point of view. In fact, the full power of the SQL language as implemented by SQLite3 - see http://sqlite.org/lang.html - can be used with any kind of data, if you define the appropriate methods of a corresponding Virtual Table module.
7.3.4. Virtual Table, ORM and TSQLRecord
The framework ORM is able to use Virtual Table modules, just by defining some TSQLRecord, inheriting from some TSQLRecordVirtual dedicated classes:
Custom Virtual Tables records classes hierarchy
TSQLRecordVirtualTableAutoID children can be defined for Virtual Tables implemented in Delphi, with a new ID generated automatically at INSERT.
TSQLRecordLogFile was defined to map the column names as retrieved by the TSQLVirtualTableLog ('log') module, and should not be used for any other purpose.
The Virtual Table module associated with such classes is retrieved from an association made to the server TSQLModel. In a Client-Server application, the association is not needed (nor to be used, since it may increase code size) on the Client side. But on the server side, the TSQLModel.VirtualTableRegister method must be called to associate a TSQLVirtualTableClass (i.e. a Virtual Table module implementation) to a TSQLRecordVirtualClass (i.e. its ORM representation).
For instance, the following code will register two TSQLRecord classes, the first using the 'JSON' virtual table module, the second using the 'Binary' module:
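A hedged sketch, using the TSQLRecordDali1/TSQLRecordDali2 classes from the regression tests quoted later in this section:
 aModel.VirtualTableRegister(TSQLRecordDali1,TSQLVirtualTableJSON);
 aModel.VirtualTableRegister(TSQLRecordDali2,TSQLVirtualTableBinary);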
This registration should be done on the Server side only, before calling TSQLRestServer.Create (or TSQLRestClientDB.Create, for a stand-alone application). Otherwise, an exception is raised at virtual table creation.
Why use such a database type, when you can create a SQLite3 in-memory table, using the :memory: file name? That is the question...
SQlite3 in-memory tables are not persistent, whereas our JSON or Binary virtual table modules can be written on disk on purpose, if the aServer.StaticVirtualTable[aClass].CommitShouldNotUpdateFile property is set to true - in this case, file writing should be made by calling explicitly the aServer.StaticVirtualTable[aClass].UpdateToFile method;
SQlite3 in-memory tables will need two database connections, or call to the ATTACH DATABASE SQL statement - both of them are not handled natively by our Client-Server framework;
SQlite3 in-memory tables are only accessed via SQL statements, whereas TSQLRestStorageInMemory tables can have faster direct access for most common RESTful commands (GET / POST / PUT / DELETE individual rows) - this could make a difference in server CPU load, especially with the Batch feature of the framework;
On the server side, it could be very convenient to have a direct list of in-memory TSQLRecord instances to work with in pure Delphi code; this is exactly what TSQLRestStorageInMemory allows, and definitively makes sense for an ORM framework;
On the client or server side, you could create calculated fields easily with TSQLRestStorageInMemory dedicated "getter" methods written in Delphi, whereas SQlite3 in-memory tables will need additional SQL coding;
SQLite3 tables are stored in the main database file - in some cases, it could be much convenient to provide some additional table content in some separated database file (for a round robin table, a configuration table written in JSON, some content to be shared among users...): this is made possible using our JSON or Binary virtual table modules (but, to be honest, the ATTACH DATABASE statement could provide a similar feature);
The TSQLRestStorageInMemory class can be used stand-alone, i.e. without the SQLite3 engine so it could be used to produce small efficient server software - see the "SQLite3\Samples\01 - In Memory ORM" folder.
7.3.5.1. In-Memory tables
A first way of using static tables, independently from the SQLite3 engine, is to call the TSQLRestServer.StaticDataCreate method.
This method is only to be called server-side, of course. For the Client, there is no difference between a regular and a static table.
The in-memory TSQLRestStorageInMemory instance handling the storage can be accessed later via the StaticDataServer[] property array of TSQLRestServer.
As we just stated, this primitive but efficient database engine can be used without need of the SQLite3 database engine to be linked to the executable, saving some KB of code if necessary. It will be enough to handle most basic RESTful requests.
7.3.5.2. In-Memory virtual tables
A more advanced and powerful way of using static tables is to define some classes inheriting from TSQLRecordVirtualTableAutoID, and associate them with some TSQLVirtualTable classes. The TSQLRecordVirtualTableAutoID parent class will specify that associated virtual table modules will behave like normal SQLite3 tables, so will have their RowID property computed at INSERT.
For instance, the supplied regression tests define two such tables, with three columns named FirstName, YearOfBirth and YearOfDeath, after the published properties definition:
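A minimal sketch of such declarations, deduced from the column names above - the actual regression-test classes may differ slightly:
 TSQLRecordDali1 = class(TSQLRecordVirtualTableAutoID)
 private
   fFirstName: RawUTF8;
   fYearOfBirth: integer;
   fYearOfDeath: word;
 published
   property FirstName: RawUTF8 read fFirstName write fFirstName;
   property YearOfBirth: integer read fYearOfBirth write fYearOfBirth;
   property YearOfDeath: word read fYearOfDeath write fYearOfDeath;
 end;
 TSQLRecordDali2 = class(TSQLRecordDali1); // same fields, in another file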
Thanks to the VirtualTableRegister calls, on the server side, the 'JSON' and 'Binary' Virtual Table modules will be launched automatically when the SQLite3 DB connection will be initialized:
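For instance, a hedged sketch of such an initialization (the exact constructor parameters may differ):
 aClient := TSQLRestClientDB.Create(aModel,nil,'test.db3',TSQLRestServerDB);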
This TSQLRestClientDB has in fact a TSQLRestServerDB instance running, which will be used for all Database access, including Virtual Table process.
Two files will be created on disk, named 'Dali1.json' and 'Dali2.data'. As stated above, the JSON version will be much bigger, but also easier to handle from outside the application.
From the code point of view, there is no difference in our ORM with handling those virtual tables, compared to regular TSQLRecord tables. For instance, here is some code extracted from the supplied regression tests:
if aClient.TransactionBegin(TSQLRecordDali1) then
try
  // add some items to the file
  V2.FillPrepare(aClient,'LastName=:("Dali"):');
  n := 0;
  while V2.FillOne do
  begin
    VD.FirstName := V2.FirstName;
    VD.YearOfBirth := V2.YearOfBirth;
    VD.YearOfDeath := V2.YearOfDeath;
    inc(n);
    Check(aClient.Add(VD,true)=n,Msg);
  end;
  // update some items in the file
  for i := 1 to n do
  begin
    Check(aClient.Retrieve(i,VD),Msg);
    Check(VD.ID=i);
    Check(IdemPChar(pointer(VD.FirstName),'SALVADOR'));
    Check(VD.YearOfBirth=1904);
    Check(VD.YearOfDeath=1989);
    VD.YearOfBirth := VD.YearOfBirth+i;
    VD.YearOfDeath := VD.YearOfDeath+i;
    Check(aClient.Update(VD),Msg);
  end;
  // check SQL requests
  for i := 1 to n do
  begin
    Check(aClient.Retrieve(i,VD),Msg);
    Check(VD.YearOfBirth=1904+i);
    Check(VD.YearOfDeath=1989+i);
  end;
  Check(aClient.TableRowCount(TSQLRecordDali1)=1001);
  aClient.Commit;
except
  aClient.RollBack;
end;
A Commit is needed from the Client side to write anything on disk. From the Server side, in order to create disk content, you'll have to explicitly call such code on purpose:
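A hedged sketch of such an explicit call - the TSQLRestStorageInMemory cast may or may not be needed, depending on the framework revision:
 TSQLRestStorageInMemory(
   aServer.StaticVirtualTable[TSQLRecordDali1]).UpdateToFile;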
Please note that the SQlite3 engine will handle any Virtual Table just like regular SQLite3 tables, concerning the atomicity of the data. That is, if no explicit transaction is defined (via TransactionBegin / Commit methods), such a transaction will be performed for every database modification (i.e. all CRUD operations, as INSERT / UPDATE / DELETE). The TSQLRestStorageInMemory.UpdateToFile method is not immediate, because it will write all table data each time on disk. It is therefore mandatory, for performance reasons, to nest multiple modifications to a Virtual Table within such a transaction. And in all cases, it is the standard way of using the ORM. If for some reason you later change your mind and e.g. move your table from the TSQLVirtualTableJSON / TSQLVirtualTableBinary engine to the default SQlite3 engine, your code could remain untouched.
It is possible to force the In-Memory virtual table data to stay in memory, and the COMMIT statement to write nothing on disk, using the following property:
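A hedged sketch, mirroring the property mentioned earlier in this section (the cast may or may not be needed, depending on the framework revision):
 TSQLRestStorageInMemory(aServer.StaticVirtualTable[TSQLRecordDali1]).
   CommitShouldNotUpdateFile := true;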
Since StaticVirtualTable property is only available on the Server side, you are the one to blame if your client updates the table data and this update never reaches the disk!
7.3.5.3. In-Memory and ACID
For data stored in memory, the TSQLRestStorageInMemory table is ACID. It means that concurrent access will be consistent and work safely, as expected.
On disk, this kind of table is ACID only when its content is written to the file. That is, the whole file will be written in an ACID way: the file will always be consistent.
The exact process of these in-memory tables is that each time you write some new data to a TSQLRestStorageInMemory table:
It will be ACID in memory (i.e. work safely in concurrent mode);
Individual writes (INSERT/UPDATE/DELETE) won't automatically be written to file;
COMMIT will by default write the whole table to file (either as JSON or compressed binary);
COMMIT won't write the data to file if the CommitShouldNotUpdateFile property is set to TRUE;
ROLLBACK process won't do anything, so won't be ACID - but since your code may later use a real RDBMS, it is a good habit to always write the command, like in the sample code above, as except aClient.RollBack.
When you write the data to file, the whole file is rewritten: it seems not feasible to write the data to disk at every write - in this case, SQLite3 in exclusive mode will be faster, since it will write only the new data, not the whole table content.
This may sound like a limitation, but in our eyes, it could be seen more as a feature. For a particular table, we do not need nor want to have a whole RDBMS/SQL engine, just direct and fast access to a TObjectList. The feature is to integrate it with our REST engine, and still be able to store your data in a regular database later (SQLite3 or external), if it appears that TSQLRestStorageInMemory storage is too limited for your process.
7.3.6. Redirect to an external TSQLRest
Sometimes, having all database process hosted in a single process may not be enough. You can use the TSQLRestServer.RemoteDataCreate() method to instantiate a TSQLRestStorageRemote class which will redirect all ORM operations to a specified TSQLRest instance, which may be remote (via TSQLRestClientHttp) or in-process (TSQLRestServer). REST redirection may be enough in simple use cases, when full Master/slave replication could be oversized.
For instance, in TTestExternalDatabase regression tests, you will find the following code:
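A hedged reconstruction of that setup - variable names and exact parameters are assumptions, and the main server using "testExternal.db3" is supposed to exist already:
 historyDB := TSQLRestServerDB.Create(
   TSQLModel.Create([TSQLRecordMyHistory],'history'),'history.db3');
 historyDB.Model.Owner := historyDB; // free the model with the server
 historyDB.DB.Synchronous := smOff;
 historyDB.DB.LockingMode := lmExclusive;
 historyDB.CreateMissingTables;
 // redirect all TSQLRecordMyHistory ORM process to the historyDB instance
 Check(fExternalServer.RemoteDataCreate(TSQLRecordMyHistory,historyDB)<>nil);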
It will create two SQLite3 databases, one main "testExternal.db3", and a separated "history.db3" database. Both will use synch off and lock exclusive access mode - see ACID and speed just above.
In the "history.db3" file, there will be the MyHistory table, whereas in testExternal.db3", there won't be any MyHistory table. All TSQLRecordMyHistory CRUD process will be transparently redirected to historyDB.
Then any ORM access from the main aExternalClient to the TSQLRecordMyHistory table will be redirected, via a hidden TSQLRestStorageRemote instance, to historyDB. There won't be any noticeable performance penalty - on the contrary, a separated database will be much better.
An alternative would have been to use the ATTACH DATABASE statement at SQLite3 level, but it would have worked only locally, and you would not be able to switch to another database engine. Whereas the RemoteDataCreate() method is generic, and will work with external databases - see below, even NoSQL databases - see below, or remote mORMot servers, accessible via a TSQLRestClientHTTP instance. The only prerequisite is that all TSQLRecord classes in the main model do exist in the redirected database model.
Note that the redirected TSQLRest instance can have its own model, its own authentication and authorization scheme, its own caching policy. It may be of great interest when tuning your application. Be aware that if you use TRecordReference published fields, the model should better be shared among the local and redirected TSQLRest instances, or at least the TSQLRecord classes should have the same order - otherwise the TRecordReference values will point to the wrong table, depending on the side the query is run.
One practical application of this redirection pattern may be with a typical corporate business. There may be a main mORMot server, at corporation headquarters, then local mORMot servers at each branch office, hosting applications for end users on the local network:
Corporate Servers Redirection
Each branch office may have its own TSQLRecord dedicated table, with all its data. Some other tables will be shared among local offices, like global configuration. Creating a dedicated table can be done in Delphi code by creating your own class type:
type
  TSQLRecordCustomerAbstract = class // never declared in Model
    ... // here the fields used for Customer business
  end;
  TSQLRecordCustomerA = class(TSQLRecordCustomerAbstract); // for office A
  TSQLRecordCustomerB = class(TSQLRecordCustomerAbstract); // for office B
  TSQLRecordCustomerClass = class of TSQLRecordCustomerAbstract;
Here, TSQLRecordCustomerA may be part only of the Office A server's TSQLModel, and TSQLRecordCustomerB only of the Office B server's TSQLModel. It will increase security, and, in the main headquarters server, both TSQLRecordCustomerA and TSQLRecordCustomerB classes will be part of the TSQLModel, and dedicated interface-based services will be able to publish some high-level data and statistics about all stored tables. Then you can use a TSQLRecordCustomerClass variable in your client code, which will contain either TSQLRecordCustomerA or TSQLRecordCustomerB, depending on the place it runs on, and the server it is connected to. On the main server, each office will have its own storage table in the (external) database, named CustomerA or CustomerB.
You will benefit from the caching abilities - see below - of each TSQLRest instance. You may have some cache tuned at a local site, whereas the cache in the main database will remain less aggressive, but safer.
Furthermore, even on a single-site server, a TSQLRecordHistory table, or more generally any aggregation data, may benefit from being hosted locally or on cheap storage, whereas the main database will stay on SSD or SAS. Thanks to this redirection feature, you can tune your hosting as expected.
Finally, if your purpose is to redirect all tables of a given TSQLRestServer to another remote TSQLRestServer (for security or hosting purpose), you may consider using TSQLRestServerRemoteDB instead. This class will redirect all tables to one external instance.
Note that both TSQLRestStorageRemote and TSQLRestServerRemoteDB classes do not yet support the Virtual Tables mechanism of SQlite3. So if you use those features, you may not be able to run JOINed queries from the redirected instance: in fact, the main SQlite3 engine will complain about a missing MyHistory table in "testExternal.db3". We will eventually define the needed TSQLVirtualTableRemote and TSQLVirtualTableCursorRemote classes to implement this feature.
Sadly, this redirection pattern won't work if the connection is lost: the main office server needs to be always accessible so that the local offices continue to work. You may consider using Master/slave replication to allow the local offices to work with their own local copy of the master data. In some cases, replication may in fact be preferable to simple redirection, especially in terms of network and resource use.
7.3.7. Virtual Tables to access external databases
As will be stated below, some external databases may be accessed by our ORM.
The Virtual Table feature of SQLite3 will allow those remote tables to be accessed just like "native" SQLite3 tables - in fact, you may be able e.g. to write a valid SQL query with a JOIN between SQlite3 tables, MS SQL Server, MySQL, FireBird, PostgreSQL, DB2, Informix and Oracle databases, even with multiple connections and several remote servers. Think of it as ORM-based Business Intelligence over any database source. Added to our code-based reporting engine (able to generate pdf), it could be a very powerful way of consolidating any kind of data.
In order to define such external tables, you define your regular TSQLRecord classes as usual, then a call to the VirtualTableExternalRegister() or VirtualTableExternalMap() functions will define this class to be managed as a virtual table, from an external database engine. Using a dedicated external database server may allow better response time or additional features (like data sharing with other applications or languages). The server side may simply omit the call to VirtualTableExternalRegister() if an internal database is needed: this allows custom database configuration at runtime, depending on the customer's expectations (or license).
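A hedged sketch of such a registration - the TSQLCustomer class and the connection settings are hypothetical, and SQLite3 is used here as the "external" engine only for illustration:
 aProps := TSQLDBSQLite3ConnectionProperties.Create('customers.db3','','','');
 VirtualTableExternalRegister(aModel,TSQLCustomer,aProps,'Customer');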
7.3.8. Virtual tables from the client side
For external databases - see below - the SQL conversion will be done on the fly in a more advanced way, so you should be able to work with such virtual tables from the client side without any specific model notification. In this case, you can safely define your tables as TSQLValue1 = class(TSQLRecord), with no further code on client side.
When working with static (in-memory / TObjectList) storage, if you expect all ORM features to work remotely, you need to notify the Client-side model that a table is implemented as virtual. Otherwise you may encounter some SQL errors when executing requests, like "no such column: ID".
For instance, imagine you defined two in-memory JSON virtual tables on Server side:
type
  TSQLServer = class(TSQLRestServerDB)
  private
    FHttpServer: TSQLHttpServer;
  public
    constructor Create;
    destructor Destroy; override;
  end;

constructor TSQLServer.Create;
var aModel: TSQLModel;
begin
  aModel := CreateModel;
  aModel.VirtualTableRegister(TSQLValue1, TSQLVirtualTableJSON);
  aModel.VirtualTableRegister(TSQLValue2, TSQLVirtualTableJSON);
  aModel.Owner := self; // model will be released with TSQLServer instance
  inherited Create(aModel, ChangeFileExt(ParamStr(0), '.db'), True);
  Self.CreateMissingTables(0);
  FHttpServer := TSQLHttpServer.Create('8080', Self);
end;

destructor TSQLServer.Destroy;
begin
  FHttpServer.Free;
  inherited;
end;
You will also need to specify on the client side that those TSQLValue1 and TSQLValue2 tables are virtual, for instance by registering the same virtual table modules in the client model:
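A hedged sketch of such a client-side notification, mirroring the server-side registration above:
 aModel := CreateModel;
 aModel.VirtualTableRegister(TSQLValue1, TSQLVirtualTableJSON);
 aModel.VirtualTableRegister(TSQLValue2, TSQLVirtualTableJSON);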
Or, in case the table is defined as TSQLValue1 = class(TSQLRecord), the client model could be updated as such:
type
  TSQLClient = class(TSQLHttpClient)
  public
    constructor Create;
  end;

constructor TSQLClient.Create;
var aModel: TSQLModel;
begin
  aModel := CreateModel;
  aModel.Props[TSQLValue1].Kind := rCustomAutoID;
  aModel.Props[TSQLValue2].Kind := rCustomAutoID;
  aModel.Owner := self; // model will be released with the TSQLClient instance
  inherited Create('127.0.0.1', '8080', aModel);
  SetUser('Admin', 'synopse');
end;
Or, in case the table is defined as TSQLValue1 = class(TSQLRecord), perhaps the easiest way of doing it, is to set the property when creating the shared model:
function CreateModel: TSQLModel;
begin
  result := TSQLModel.Create([TSQLAuthGroup, TSQLAuthUser, TSQLValue1, TSQLValue2]);
  result.Props[TSQLValue1].Kind := rCustomAutoID;
  result.Props[TSQLValue2].Kind := rCustomAutoID;
end;
Once again, this restriction does not apply to external database tables - see below.
8. External SQL database access
Adopt a mORMot
Our ORM RESTful framework is able to access most available database engines, via a set of generic units and classes. Both SQL and NoSQL engines could be accessed - quite a unique feature in the ORM landscape (in Delphi, of course, but also in Java or C# environments).
Remember the diagram introducing mORMot's Database layer:
mORMot Persistence Layer Architecture
The framework still relies on SQLite3 as its SQL core on the server, but a dedicated mechanism allows access to any remote database, and mixes those tables content with the native ORM tables of the framework. Thanks to the unique Virtual Tables magic mechanism of SQLite3, those external tables may be accessed as native SQLite3 tables in our SQL statements, even for NoSQL engines.
Mode - Engines
SQL - SQLite3, Oracle, NexusDB, MS SQL, Jet/MSAccess, FireBird, MySQL, PostgreSQL, IBM DB2, IBM Informix - see below
You can even mix databases, i.e. the same mORMot ORM could persist, at the same time, its data in several databases, some TSQLRecord as fast internal SQLite3 tables or as TObjectList, others in a PostgreSQL database (tied to an external reporting/SAP engine), and e.g. flat consolidated data in a MongoDB instance.
8.1. SynDB direct RDBMS access
External Relational Database Management System (RDBMS) can be accessed via our SynDB.pas units. Then, the framework ORM is able to access them via the mORMotDB.pas bridge unit. But you can use the SynDB.pas units directly, without any link to our ORM.
The current list of handled data access libraries is:
This list is not closed, and may be completed in the near future. Any help is welcome here: it is not difficult to implement a new unit, following the patterns already existing. You may start from an existing driver (e.g. the Zeos or Alcinoe libraries). Open Source contributions are always welcome!
Thanks to the design of our SynDB.pas classes, it was very easy (and convenient) to implement SQLite3 direct access. It is even used for our regression tests, in order to implement stand-alone unitary testing.
An Oracle dedicated direct access was added, because all available OleDB providers for Oracle (i.e. both Microsoft's and Oracle's) do have problems with handling BLOBs, and we wanted our Clients to have a lightweight and as fast as possible access to this great database.
In fact, OleDB is a good candidate for database access with good performance, Unicode native, with a lot of available providers. Thanks to OleDB, we are already able to access almost any existing database. The code overhead in the server executable will also be much less than with adding any other third-party Delphi library. And we will let Microsoft or the OleDB provider perform all the testing and debugging for each driver.
Since revision 1.17, direct access to the ODBC layer has been included in the framework database units. It has a wider range of free providers (including e.g. MySQL or FireBird), and is the official replacement for OleDB (the next version of MS SQL Server will provide only ODBC providers, as Microsoft has warned its customers).
Since revision 1.18, any ZeosLib / ZDBC driver can be used, with fast direct access to the underlying RDBMS client library. Since the ZDBC library does not rely on DB.pas, and by-passes the slow TDataSet component, its performance is very high. The ZDBC maintainers did a lot of optimizations, especially to work with mORMot, and this library is a first-class citizen to work with our framework.
Since the same 1.18 revision, DB.pas can be used with our SynDB.pas classes. Of course, using TDataset as intermediate layer will be slower than the SynDB.pas direct access pattern. But it will allow you to re-use any existing (third-party) database connection driver, which could make sense in case of evolution of an existing application, or to use an unsupported database engine.
Last but not least, the SynDBRemote.pas unit allows you to create database applications that perform SQL operations on a remote SynDB HTTP server, instead of a database server. You can create connections just like any other SynDB database, but the transmission will take place over HTTP, with no need to install a database client with your application - see below.
The following connections are therefore possible:
SynDB Architecture
This diagram is a bit difficult to follow at this level of detail - but you get the general layered design, I guess. It will be split into smaller focused diagrams later.
Direct fast access via OleDB, ODBC, ZDBC, Oracle (OCI) or SQLite3 (statically linked or via external dll);
Thin wrapper around any DB.pas / TDataset based components (e.g. NexusDB, DBExpress, FireDAC, AnyDAC, UniDAC, BDE...);
Generic abstract OOP layout, with a restricted set of data types, but able to work with any SQL-based database engine;
Tested with MS SQL Server 2008/2012, Firebird 2.5.1, PostgreSQL 9.2/9.3, MySQL 5.6, IBM DB2 10.5, Oracle 11g, and the latest SQLite3 engine;
Could access any local or remote Database, from any edition of Delphi (even Delphi 7 personal, the Turbo Explorer or Starter edition), just for free (in fact, it does not use the DB.pas standard unit and all its dependencies);
Unicode, even with pre-Unicode version of Delphi (like Delphi 7 or 2007), since it uses internally UTF-8 encoding;
Handle NULL or BLOB content for parameters and results, including stored procedures;
Avoid most memory copy or unnecessary allocation: we tried to access the data directly from the retrieved data buffer, just as given from OleDB / ODBC or the low-level database client (e.g. OCI for Oracle, or the SQLite3 engine);
Designed to achieve the best possible performance on 32-bit or 64-bit Windows: most time is spent in the database provider (OleDB, ODBC, OCI, SQLite3) - the code layer added to the database client is very thin and optimized;
Could be safely used in a multi-threaded application/server (with dedicated thread-safe methods, usable even if the database client is not officially multi-thread);
Allow parameter bindings of prepared requests, with fast access to any parameter or column name (thanks to TDynArrayHashed);
Column values accessible with most Delphi types, including Variant or generic string / WideString;
Available ISQLDBRows interface - to avoid typing try...finally Query.Free end; and allow one-line SQL statement;
Late-binding column access, via a custom variant type when accessing the result sets;
Two kinds of optimized TDataSet result sets: one read-write based on TClientDataSet, and a much faster read-only TSynSQLStatementDataSet;
Direct UTF-8 JSON content creation, with no temporary data copy nor allocation (this feature will be the most used in our JSON-based ORM server);
High-level catalog / database layout abstract methods, able to retrieve the table and column properties (including indexes), for database reverse-engineering; provide also SQL statements to create a table or an index in a database-abstract manner; those features will be used directly by our ORM;
Designed to be used with our ORM, but could be used stand-alone (a full Delphi 7 client executable is just about 200 KB), or even in any existing Delphi application, thanks to a TQuery-like wrapper;
TQuery emulation class, for direct re-use with existing code, in replacement to DB.pas based code (including the deprecated BDE technology), with huge speed improvement for result sets (since we bypass the slow TDataSet component);
Fast and safe remote access over HTTP to any SynDB engine, without the need to deploy the RDBMS client library with the application;
Free SynDBExplorer tool provided, which is a small but efficient way of running queries in a simple User Interface, on all supported engines, and publish as server or consume as client SynDB remote access over HTTP - see below; it is also a good sample program of a stand-alone usage of those libraries.
8.1.2. Data types
Of course, our ORM does not need a whole feature set (do not expect to use these database classes with your VCL DB RAD components), but handles directly the basic SQL column types, as needed by our ORM (derived from SQLite's internal column types): NULL, Int64, Double, Currency, DateTime, RawUTF8 and BLOB.
Those types will map low-level database-level access types, not high-level Delphi types as TSQLFieldType defined in mORMot.pas, or the generic huge TFieldType as defined in the standard VCL DB.pas unit. In fact, it is more tied to the standard SQLite3 generic types, i.e. NULL, INTEGER, REAL, TEXT, BLOB (with the addition of a ftCurrency and ftDate type, for better support of most DB engines) see http://www.sqlite.org/datatype3.html
You can note that the only string type handled here uses UTF-8 encoding (implemented using our RawUTF8 type), for cross-Delphi true Unicode process. Code can access the textual data via variant, string or widestring variables and parameters, but our units will use UTF-8 encoding internally - see Unicode and UTF-8. It will therefore interface directly with our ORM, which uses the same encoding. Of course, if the column was not defined as Unicode text in the database, any needed conversion to/from the corresponding charset will take place at the data provider level; but in your user code, you will always have access to the Unicode content.
BLOB columns or parameters are accessed as RawByteString variables, which may be mapped to a standard TStream via our TRawByteStringStream.
8.1.3. Database types
In addition to raw data access, the SynDB.pas unit handles some SQL-level generation, which will be used by our Object-Relational Mapping kernel.
The following RDBMS database engines are defined as such in SynDB.pas:
dSQLite - SQLite3 3.7.11 and up (we supply the latest version for static linking)
dFirebird - Firebird 2.5.1
dNexusDB - NexusDB 3.11
dPostgreSQL - PostgreSQL 9.2/9.3
dDB2 - IBM DB2 10.5
dInformix - IBM Informix 11.70
The above versions have been tested, but newer or older revisions may also work. Your feedback is welcome: we cannot achieve to test all possible combinations of databases and clients on our own!
The SynDB.pas unit is able to generate the SQL statements of those engines, for a CREATE TABLE / CREATE INDEX command, retrieve metadata (e.g. the tables and fields information), compute the right limit/offset syntax for a SELECT, compute multi-INSERT statements - see below, check the SQL keywords, define specific schema/owner naming conventions, process date and time values, handle errors and exceptions, or even create a database.
8.1.4. SynDB Units
Here are the units implementing the external database-agnostic features:
It is worth noting that those units only depend on SynCommons.pas, therefore are independent of the ORM part of our framework (even the remote access). They may be used separately, accessing all those external databases with regular SQL code. Since all their classes inherit from abstract classes defined in SynDB.pas, switching from one database engine to another (even a remote HTTP access) is just a matter of changing one class type.
8.1.5. SynDB Classes
The data is accessed via three families of classes:
Connection properties, which store the database high-level properties (like database implementation classes, server and database name, user name and password);
Connections, which implement an actual connection to a remote database, according to the specified Connection properties - of course, there can be multiple connections for the same connection properties instance;
Statements, which are individual SQL queries or requests, which may be multiple for one existing connection.
Here is the general class hierarchy, for all available remote connection properties:
TSQLDBConnectionProperties classes hierarchy
Those classes are the root classes of the SynDB.pas units, by which most of your database process will be implemented. For instance, the mORMot framework ORM only needs a given TSQLDBConnectionProperties instance to access any external database.
Then the following connection classes are defined:
TSQLDBSQLite3Connection classes hierarchy
Each connection may create a corresponding statement instance:
TOleDBStatement classes hierarchy
In the above hierarchy, TSQLDBDatasetStatementAbstract is used to allow the use of custom classes for parameter process, e.g. TADParams for FireDAC (which features Array DML).
Some dedicated Exception classes are also defined:
ESQLQueryException classes hierarchy
Check the TestOleDB.dpr sample program, located in the SQlite3 folder, using our SynOleDB unit to connect to a local MS SQL Server 2008 R2 Express edition, which will write a file with the JSON representation of the Person.Address table of the sample database AdventureWorks2008R2.
8.1.6. ISQLDBRows interface
The easiest way is to stay at the TSQLDBConnectionProperties level, using the Execute() methods of this instance, and access any returned data via an ISQLDBRows interface. It will automatically use a thread-safe connection to the database, in an abstracted way.
Depending on the TSQLDBConnectionProperties sub-class, input parameters do vary. Please refer to the documentation of each Create() constructor to set all parameters as expected.
Then any sub-code is able to execute any SQL request, with optional bound parameters, as such:
procedure UseProps(Props: TSQLDBConnectionProperties);
var I: ISQLDBRows;
begin
I := Props.Execute('select * from Sales.Customer where AccountNumber like ?',['AW000001%']);
while I.Step do
assert(Copy(I['AccountNumber'],1,8)='AW000001');
end;
In this procedure, no TSQLDBStatement is defined, and there is no need to add a try ... finally Query.Free; end; block.
In fact, the Props.Execute method returns a TSQLDBStatement instance as an ISQLDBRows, whose methods can be used to loop through each result row, and retrieve individual column values. In the code above, I['AccountNumber'] will in fact call the I.Column[] default property, which will return the column value as a variant. You have other dedicated methods, like ColumnUTF8 or ColumnInt, able to retrieve the expected data directly.
Note that all bound parameters will appear within the SQL statement, when logged using our TSynLog classes - see below.
8.1.7. Using properly the ISQLDBRows interface
You may have noticed in the previous code sample, that we used a UseProps() sub-procedure. This was made on purpose.
We may have written our little test as such:
var Props: TSQLDBConnectionProperties;
I: ISQLDBRows;
...
Props := TOleDBMSSQLConnectionProperties.Create('.\SQLEXPRESS','AdventureWorks2008R2','','');
try
  I := Props.Execute('select * from Sales.Customer where AccountNumber like ?',['AW000001%']);
  while I.Step do
    assert(Copy(I['AccountNumber'],1,8)='AW000001');
finally
  Props.Free;
end;
end;
In fact, you should not use this pattern. This code will lead to an unexpected access violation at runtime.
Behind the scenes, as will be detailed below, the compiler generates some hidden code to finalize the I: ISQLDBRows local variable, as such:
...
finally
Props.Free;
end;
  I := nil; // this is generated by the compiler, just before the final "end;"
end;
So ISQLDBRows is released after the Props instance, and an access violation occurs.
The correct way to write it is either to use a sub-function (which will release the local ISQLDBRows when the function exits), or to explicitly release the interface variable:
while I.Step do
assert(Copy(I['AccountNumber'],1,8)='AW000001');
finally
  I := nil; // release local variable
Props.Free;
end;
Of course, most of the time you will initialize your TSQLDBConnectionProperties globally for your process, then release it when it ends. Each request will take place in its own sub-method, so the interface will be released before the main TSQLDBConnectionProperties instance is freed.
Last but not least, it is worth noting that you should not create a TSQLDBConnectionProperties instance each time you need to access the database, since you would probably lose most of the SynDB features, like the per-thread connection pool, or the statement cache.
8.1.8. Late-binding
We implemented late-binding access of column values, via a custom variant type. It uses the internal mechanism used for Ole Automation, here to access column content as if column names were native object properties.
The resulting Delphi code to write is just clear and obvious:
procedure UseProps(Props: TSQLDBConnectionProperties);
var Row: Variant;
begin
  with Props.Execute('select * from Sales.Customer where AccountNumber like ?',
      ['AW000001%'],@Row) do
    while Step do
assert(Copy(Row.AccountNumber,1,8)='AW000001');
end;
Note that Props.Execute returns an ISQLDBRows interface, so the code above will initialize (or reuse an existing) thread-safe connection (OleDB uses a per-thread model), initialize a statement, execute it, access the rows via the Step method and the Row variant, retrieving the column value via a direct Row.AccountNumber statement.
The above code is perfectly safe, and all memory will be released with the reference count garbage-collector feature of the ISQLDBRows interface. You are not required to add any try..finally Free; end statements in your code.
This is the magic of late-binding in Delphi. Note that a similar feature is available for our SynBigTable unit.
In practice, this code is slower than using a standard property based access, like this:
while Step do
assert(Copy(ColumnUTF8('AccountNumber'),1,8)='AW000001');
But the first version, using late-binding of column name, just sounds more natural.
Of course, since it is late-binding, we are not able to let the compiler check at compile time for the column name. If the column name in the source code is wrong, an error will be triggered at runtime only.
First of all, let's see the fastest way of accessing the row content.
In all cases, using the textual version of the column name ('AccountNumber') is slower than using directly the column index. Even if our SynDB.pas library uses a fast lookup using hashing, the following code will always be faster:
var Customer: Integer;
begin
  with Props.Execute(
      'select * from Sales.Customer where AccountNumber like ?',
      ['AW000001%']) do
  begin
Customer := ColumnIndex('AccountNumber');
while Step do
assert(Copy(ColumnString(Customer),1,8)='AW000001');
end;
end;
But to be honest, after profiling, most of the time is spent in the Step method, especially in fRowSet.GetData. In practice, we were not able to notice any speed increase worth mentioning with the code above.
Our name lookup via a hashing function (i.e. TDynArrayHashed) just does its purpose very well.
On the contrary, the Ole-Automation based late-binding was found to be slower, after profiling. In fact, the Row.AccountNumber expression calls a hidden DispInvoke function, which is slow when called multiple times. Our SynCommons.pas unit is able to hack the VCL, and, by patching the VCL code in-memory, will call an optimized version of this function. The resulting speed is very close to a direct Column['AccountNumber'] call. See SDD # DI-2.2.3.
8.1.9. TDataset and SynDB
Since our SynDB.pas unit does not rely on the Delphi's DB.pas unit, its result sets do not inherit from the TDataset.
As a benefit, those result sets will be much faster, when accessed from your object code. But as a drawback, you won't be able to use them in your regular VCL applications.
In order to easily use the SynDB.pas unit with VCL components, you can create TDataSet result sets from any SynDB query. You have access to two kinds of optimized TDataSet result sets:
See sample "17 - TClientDataset use" to find out more about using such TDataSet, including some speed information. You need to have run the TestSQL3.dpr set of regression tests before, to have the expected SQlite3 data file.
8.1.10. TQuery emulation class
The SynDB.pas unit offers a TQuery-like class. This class emulates regular TQuery classes, without inheriting from DB.pas nor its slow TDataSet.
It mimics basic TQuery VCL methods, with the following benefits:
Does not inherit from TDataset, but has its own light implementation over SynDB.pas ISQLDBStatement result sets, so is usually much faster;
Will also be faster for field and parameter access by name - or even by index;
Is Unicode-ready, even with older pre-Unicode version of Delphi, able to return the data as WideString, independently from the current system charset;
Of course, since it is not a TDataSet component, you cannot use it directly as a regular replacement for your RAD code. But if your application is data-centric and tries to encapsulate its business logic with some classes - i.e. if it tries to properly implement OOP, not RAD - you can still replace your existing code directly with the TQuery emulator:
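Here is a minimal sketch of such a replacement, assuming aSQLDBConnection is an already available TSQLDBConnection instance - table, parameter and literal values are of course just placeholders:
Q := TQuery.Create(aSQLDBConnection);
try
  Q.SQL.Clear;
  Q.SQL.Add('select * from Sales.Customer');
  Q.SQL.Add('where AccountNumber like :AccountNumber;');
  Q.ParamByName('AccountNumber').AsString := 'AW000001%';
  Q.Open;
  while not Q.Eof do
  begin
    // use e.g. Q.FieldByName('AccountNumber').AsString, as with a regular TQuery
    Q.Next;
  end;
finally
  Q.Free;
end;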
You had better use TSQLDBStatement instead of this wrapper, but having such a code-compatible TQuery replacement could ease some existing code upgrades, especially for Legacy code and existing projects. For instance, it will help to avoid deploying the deprecated BDE, generate (much) smaller executables, access any database without paying a big fee, avoid rewriting a lot of existing code lines of a big legacy application, or let your old application communicate with the database over plain HTTP, without the need to install any RDBMS client - see below.
8.1.11. Storing connection properties as JSON
You can use a TSynConnectionDefinition storage to persist the connection properties as a JSON content, in memory or file.
The password will be encrypted and encoded as Base64 in the file, for safety. You could use TSynConnectionDefinition's Password and PasswordPlain properties to compute the value to be written on disk.
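As a sketch - assuming the DefinitionToFile() and CreateFromFile() methods of TSQLDBConnectionProperties, and an arbitrary file name:
// save the connection properties (with encrypted password) as JSON
Props.DefinitionToFile('dbconnection.config');
// ... then, later on, re-create a connection properties instance from that file
Props := TSQLDBConnectionProperties.CreateFromFile('dbconnection.config');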
See also TSQLRest.CreateFrom() for a similar feature at ORM/REST level, and function TSQLRestCreateFrom( aDefinition: TSynConnectionDefinition ) as defined in mORMotDB.pas which is able to create a regular local ORM if aDefinition.Kind is a TSQLRest class name, but also an ORM with external DB storage - see below - if aDefinition.Kind is a TSQLDBConnectionProperties class name.
8.2. SynDB clients
From the SynDB.pas logical point of view, here is how databases can be accessed:
SynDB First Level Providers
Of course, the physical implementation is more complicated, as was stated in SynDB Architecture.
We will now detail how these available database connections are interfaced as SynDB.pas classes.
8.2.1. OleDB or ODBC to rule them all
OleDB (Object Linking and Embedding, Database, sometimes written as OLE DB or OLE-DB) is an API designed by Microsoft for accessing data from a variety of sources in a uniform manner.
SynDB and OleDB
Of course, you have got the Microsoft SQL Native Client to access MS SQL Server 2005/2008/2012, but Oracle also provides a native OleDB provider (even if we found out that both this Oracle provider and Microsoft's version have problems with BLOBs). Do not forget about the Advantage Sybase OleDB driver and such...
If you plan to connect to a MS SQL Server, we highly recommend using the TOleDBMSSQL2012ConnectionProperties class, corresponding to SQLNCLI11, part of the Microsoft® SQL Server® 2012 Native Client: it is able to connect to any revision of MS SQL Server (even MS SQL Server 2008), and was found to be the most stable. You can get it from http://www.microsoft.com/en-us/download/details.aspx?id=29065 by downloading the sqlncli.msi corresponding to your Operating System. Most of the time, you should download the X64 Package of sqlncli.msi, which will also install the 32-bit version of SQL Server Native Client, so will work for a 32-bit Delphi executable - the X86 Package is for a 32-bit Windows system only.
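For instance, connecting to a local SQL Server Express instance with Windows authentication could be sketched as such (server and database names are placeholders, mirroring the sample used earlier in this document):
Props := TOleDBMSSQL2012ConnectionProperties.Create(
  '.\SQLEXPRESS','AdventureWorks2008R2','','');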
ODBC (Open DataBase Connectivity) is a standard C programming language middle-ware API for accessing database management systems (DBMS). ODBC was originally developed by Microsoft during the early 1990s, then was deprecated in favor of OleDB. More recently, Microsoft has officially deprecated OleDB, and urges all developers to switch to the open and cross-platform ODBC API for native connection. Back & worse strategy from Micro$oft... one more time! http://blogs.msdn.com/b/sqlnativeclient/archive/2011/08/29/microsoft-is-aligning-with-odbc-for-native-relational-data-access.aspx
SynDB and ODBC
By using our own OleDB and ODBC implementations, we will for instance be able to convert directly the OleDB or ODBC binary rows to JSON, with no temporary conversion into the Delphi high-level types (like temporary string or variant allocations). The resulting performance is much higher than using standard TDataSet or other components, since we will bypass most of the layers introduced by BDE/dbExpress/FireDAC/AnyDAC component sets.
Most OleDB / ODBC providers are free (even maintained by the database owner), others will need a paid license.
It is worth saying that, when used in a mORMot Client-Server architecture, object persistence using an OleDB or ODBC remote access expects only the database instance to be reachable on the Server side. Clients could communicate via standard HTTP, so won't need any specific port forwarding or other IT configuration to work as expected.
8.2.2. ZEOS via direct ZDBC
8.2.2.1. The mORMot's best friend
ZeosLib, aka Zeos, is an Open Source library which provides native access to many database systems, developed for Delphi, Kylix and Lazarus / FreePascal. It is fully object-oriented and has a totally modular design. It connects to the databases by wrapping their native client libraries, and makes them accessible via its abstract layer, named ZDBC. Originally, ZDBC was a port of JDBC 2.0 (Java Database Connectivity API) to Object Pascal. Since that time the API was slightly extended, but the main ideas remain unchanged, so the official JDBC 2.0 specification is the main entry point to the ZDBC API.
The latest 7.x branch was deeply re-factored, and new methods and performance optimizations were introduced. In fact, we worked hand in hand with Michael (the main contributor of ZeosLib) to ensure that the maximum performance is achieved. The result is an impressive synergy of mORMot and ZeosLib, for both reading and writing data.
Since revision 1.18 of the framework, we included direct integration of ZeosLib into the mORMot persistence layer, with direct access to the ZDBC layer. That is, our SynDBZeos unit does not reference DB.pas, but accesses the ZDBC interfaces directly.
SynDB and Zeos / ZDBC
Such direct access, by-passing the VCL DB.pas layer and its TDataSet bottleneck, is very close to our SynDB.pas design. As such, ZeosLib is a first class citizen library for mORMot. The SynDBZeos unit is intended to be a privileged access point to external SQL databases.
8.2.2.2. Recommended version
We recommend that you download the 7.2 branch of Zeos/ZDBC, which is the current trunk, at the time of this writing.
A deep code refactoring has been made by the Zeos/ZDBC authors (thanks a lot Michael, aka EgonHugeist!), even taking care of mORMot expectations, to provide the best performance and integration, e.g. for UTF-8 content processing. In comparison with the previous 7.1 release, speed increase can be of more than 10 times, depending on the database back-end and use case!
When writing data (i.e. Add/Update/Delete operations), Array binding support has been added to the Zeos/ZDBC 7.2 branch, and our SynDBZeos unit will use it if available, detecting if the IZDatabaseInfo.SupportsArrayBindings property is true - which is the case for the Oracle and FireBird providers by now. Our ORM benefits from it when processing in BATCH mode, even letting ZDBC create the optimized SQL - see below. Performance at reading is very high, much higher than any other DB.pas based library, in case of single record retrieval. For instance, TSQLDBZEOSStatement.ColumnsToJSON() will avoid most temporary memory allocations, and is able to create the JSON directly from the low-level ZDBC binary buffers.
If you need to stick to a version prior to 7.2, and want to work as expected with a SQLite3 back-end (but you shouldn't have any reason to do so, since Zeos will be slower compared to SynDBSQLite3), you need to apply some patches for Zeos < 7.2, in the TZSQLiteCAPIPreparedStatement.ExecuteQueryPrepared() and TZSQLiteResultSet.FreeHandle methods, as stated in a comment at the beginning of SynDBZeos.pas.
8.2.2.3. Connection samples
If you want e.g. to connect to MySQL via Zeos/ZDBC, follow those steps:
Download "Windows (x86, 32-bit), ZIP Archive" from http://dev.mysql.com/downloads/connector/c - then extract the archive: only libmysql.dll is needed, and should be placed either in the executable folder, or in the system PATH;
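Then create the connection properties - a sketch only, with placeholder host, database and credentials, and assuming the TSQLDBZEOSConnectionProperties.URI() helper to compute the ZDBC connection string:
PropsMySQL := TSQLDBZEOSConnectionProperties.Create(
  TSQLDBZEOSConnectionProperties.URI(dMySQL,'192.168.2.60:3306'),
  'world','root','dev');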
See TSQLDBZEOSConnectionProperties documentation for further information about the expected syntax, and available abilities of this great open source library.
8.2.3. Oracle via OCI
For our framework, and in completion to SynDBZeos or our SynOleDB / SynDBODBC units, the SynDBOracle unit has been implemented. It allows direct access to any remote Oracle server, using the Oracle Call Interface.
Oracle Call Interface (OCI) is the most comprehensive, high performance, native unmanaged interface to the Oracle Database that exposes the full power of the Oracle Database. A direct interface to the oci.dll library was written, using our DB abstraction classes introduced in SynDB.pas.
We tried to implement all best-practice patterns detailed in the official Building High Performance Drivers for Oracle reference document.
The resulting speed is quite impressive: for all requests, SynDBOracle is 3 to 5 times faster than a SynOleDB connection using the native OleDB Provider supplied by Oracle. A similar (or even worse) speed penalty has been observed with the official ODBC driver from Oracle, via a SynDBODBC-based connection. For more detailed numbers, see Data access benchmark.
8.2.3.1. Optimized client library
It is worth saying that, when used in a mORMot Client-Server architecture, object persistence using an Oracle database expects only the Oracle instance to be reachable on the Server side, just like with OleDB or ODBC.
Here are the main features of this SynDBOracle unit:
Direct access to the Oracle Call Interface (OCI) client, with no BDE, Midas, DBExpress, nor OleDB / ODBC provider necessary;
Dedicated to work with any version of the Oracle OCI interface, starting from revision 8;
Optimized for the latest features of Oracle 11g/12c (e.g. using native Int64 for retrieving NUMBER fields with no decimal);
Able to work with the Oracle Instant Client for No Setup applications (installation via file/folder copy);
Natively Unicode (uses internal UTF-8 encoding), for all versions of Delphi, with special handling of each database char-set;
Tried to achieve best performance available from every version of the Oracle client;
Designed to work under any version of Windows, either in 32-bit or 64-bit architecture (but the OCI library must be installed with the same bitness as the compiled Delphi application, i.e. only 32-bit for this current version);
Late-binding access to column names, using a new dedicated Variant type (similar to Ole Automation runtime properties);
Connections are multi-thread ready with low memory and CPU resource overhead;
Can use connection strings like '//host[:port]/[service_name]', avoiding use of the TNSNAME.ORA file;
Use Rows Array and BLOB fetching, for best performance (ZEOS/ZDBC did not handle this, for instance);
Handle Prepared Statements - on both client and server side, if available - server side caching leads to up to a 3 times speed boost, from our experiments;
Implements Array Binding for very fast bulk modifications - insert, update or deletion of a lot of rows at once;
Cursor support, which is pretty common when working with stored procedures and legacy code.
Of course, this unit is perfectly integrated with the External SQL database access process. For instance, it features native export to JSON methods, which will be the main entry point for our ORM framework. And Array binding is handled directly during BATCH sequences - see below.
8.2.3.2. Direct connection without Client installation
You can use the latest version of the Oracle Instant Client (OIC) provided by Oracle - see http://www.oracle.com/technetwork/database/features/instant-client - which allows you to run client applications without installing the standard (huge) Oracle client or having an ORACLE_HOME.
Oracle Connectivity with SynDBOracle
Just deliver the few dll files in the same directory as the application (probably a mORMot server), and it will work at amazing speed, with all features of Oracle (other stand-alone direct Oracle access libraries rely on the deprecated Oracle 8 protocol).
8.2.3.3. Oracle Wallet support
Password credentials for connecting to databases can now be stored in a client-side Oracle Wallet, a secure software container used to store authentication and signing credentials.
This wallet usage can simplify large-scale deployments that rely on password credentials for connecting to databases. When this feature is configured, application code, batch jobs, and scripts no longer need embedded user names and passwords. Risk is reduced because such passwords are no longer exposed in the clear, and password management policies are more easily enforced without changing application code whenever user names or passwords change.
Wallet configuration is performed on the computer where server is running. You must perform a full Oracle client setup: OIC - see Direct connection without Client installation - does not give access to wallet authentication.
Steps to create a Wallet:
1) Create a folder for your wallet:
> mkdir c:\OraWallets
2) Create a wallet on the client by using the following syntax at the command line:
> mkstore -wrl c:\OraWallets -create
Oracle will ask you for the main wallet password - remember it!
3) Create database connection credentials in the wallet by using the following syntax at the command line:
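For instance - dbalias, username and password below are placeholders:
> mkstore -wrl c:\OraWallets -createCredential dbalias username password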
where password is the password of the database user. Oracle will ask you for the wallet password - use the main password from the previous step.
4) In the client sqlnet.ora file, add the WALLET_LOCATION parameter and set it to the directory location of the wallet and set SQLNET.WALLET_OVERRIDE parameter to TRUE:
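A typical sqlnet.ora excerpt may look as follows (assuming the wallet folder created above):
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = c:\OraWallets))
  )
SQLNET.WALLET_OVERRIDE = TRUE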
8.2.4. SQLite3
For our ORM framework, we implemented an efficient SQLite3 wrapper, joining the SQLite3 engine either statically (i.e. within the main exe) or from an external sqlite3.dll.
It was an easy task to let the SynSQLite3.pas unit be called from our SynDB.pas database abstract classes. Adding such another Database is just a very thin layer, implemented in the SynDBSQLite3.pas unit.
If you want to statically link the SQLite3 engine to your project executable, ensure you include the SynSQLite3Static.pas unit in your uses clause. Otherwise, define a TSQLite3LibraryDynamic instance to load an external sqlite3.dll library:
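For instance - a sketch only, assuming the library file lies in the executable folder:
FreeAndNil(sqlite3); // release any previous TSQLite3Library instance
sqlite3 := TSQLite3LibraryDynamic.Create('sqlite3.dll');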
To create a connection property to an existing SQLite3 database file, call the TSQLDBSQLite3ConnectionProperties.Create constructor, with the actual SQLite3 database file as ServerName parameter, and optionally the proprietary encryption password in Password (available since rev. 1.16); the other parameters (DataBaseName, UserID) are just ignored.
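As a minimal sketch (file name and password are placeholders):
Props := TSQLDBSQLite3ConnectionProperties.Create('data.db3','','','password');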
These classes will implement an internal statement cache, just as the one used for TSQLRestServerDB. In practice, using the cache can make process up to two times faster (when processing small requests).
When used within the mORMot ORM, you have therefore two ways of accessing the SQLite3 engine:
Either directly from the ORM core;
Either virtually, as external tables.
SynDB, mORMot and SQLite3
If your mORMot-based application purpose is to only use one centralized SQLite3 database, it does not make sense to use SynDBSQLite3 external tables. But if you want, in the future, to be able to connect to any external database, or to split your data in several database files, using those external SQLite3 tables do make sense. Of course, the SQlite3 engine library itself will be shared with both internal and external process.
8.2.5. DB.pas libraries
Since revision 1.18 of the framework, a new SynDBDataset.pas unit has been introduced, able to interface any DB.pas based library to our SynDB.pas classes, using TDataset to retrieve the results. Due to the TDataset design, performance is somewhat degraded in comparison to direct SynDB.pas connections (e.g. for SQLite3 or Oracle), but it also widens the range of potential database access.
Some dedicated providers have been published in the SynDBDataset sub-folder of the mORMot source code repository. Up to now, FireDAC (formerly AnyDAC), UniDAC and BDE libraries are interfaced, and a direct connection to the NexusDB engine is available.
Since there are a lot of potential combinations here - see SynDB Architecture - feedback is welcome. Due to our Agile process, we will first stick to the providers we need and use. It is up to mORMot users to ask for additional features, and to provide wrappers, if possible, or at least testing abilities. Of course, DBExpress would benefit from being integrated, even if Embarcadero just acquired AnyDAC and revamped/renamed it as FireDAC - to make it the new official platform.
8.2.5.1. NexusDB access
NexusDB is a "royalty-free, SQL:2003 core compliant, Client/Server and Embedded database system, with features that rival other heavily licensed products" (vendor's terms).
8.2.5.2. FireDAC / AnyDAC library
FireDAC is a unique set of Universal Data Access Components for developing cross-platform database applications in Delphi. This was in fact a third-party component set, bought by Embarcadero from DA-SOFT Technologies (formerly known as AnyDAC), and included with several editions of Delphi XE3 and up. This is the new official platform for high-speed database development in Delphi, replacing the now deprecated DBExpress.
SynDB and FireDAC / AnyDAC
Our integration within the SynDB.pas units and the mORMot persistence layer has been tuned. For instance, you can have direct access to the high-speed FireDAC Array DML feature from the ORM batch process, through so-called array binding - see below.
8.2.5.3. UniDAC library
Universal Data Access Components (UniDAC) is a cross-platform library of components that provides direct access to multiple databases from Delphi. See http://www.devart.com/unidac
SynDB and UniDAC
For instance, to access a remote MySQL database, you should be able to connect using:
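A sketch only - assuming the TSQLDBUniDACConnectionProperties.URI() helper, with placeholder host, database and credentials:
Props := TSQLDBUniDACConnectionProperties.Create(
  TSQLDBUniDACConnectionProperties.URI(dMySQL,'192.168.2.60:3306'),
  'world','root','dev');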
This library gives pretty stable results, but lacks the array binding feature, in comparison to FireDAC.
8.2.5.4. BDE engine
Borland Database Engine (BDE) is the Windows-based core database engine and connectivity software shipped with earlier versions of Delphi. Even if it is deprecated, and replaced by DBExpress since 2000, it is a working solution, easy to interface as a SynDB.pas provider.
SynDB and BDE
Please do not use the BDE on any new project! You should rather switch to another access layer.
8.2.6. Remote access via HTTP
The SynDBRemote.pas unit allows you to create database applications that perform SQL operations on a remote HTTP server, instead of a database server. You can create connections just like any other SynDB.pas database, but the transmission will take place over HTTP. As a result, no database client is to be deployed on the end user application: it will just use HTTP requests, even over Internet. You can use all the features of SynDB.pas classes, with the ease of one optimized HTTP connection.
SynDB Remote access Overview
This feature is not part of our RESTful ORM, so does not use the mORMot.pas unit, but its own optimized protocol, using enhanced security (transmission encryption with user authentication and optional HTTPS) and automatic data compression. Only the HTTP client and server classes, from the SynCrtSock.pas unit, are used.
Since your application can use both TDataSet - see TDataset and SynDB - and emulated TQuery - see TQuery emulation class - this new means of transmission may make it easy to convert existing Delphi client-server applications into a Multi-tier architecture, with minimal changes in source code. Then, for your new code, you may switch to a SOA / ORM design, using mORMot's RESTful abilities - see below.
The transmission protocol uses an optimized binary format, which is compressed, encrypted and digitally signed on both ends, and the remote user authentication will be performed via a challenge validation scheme. You can also publish your server over HTTPS, if needed, in http.sys kernel mode.
8.2.6.1. Server and Client classes
To publish your SynDB.pas connection, you just need to initialize one of the TSQLDBServer* classes defined in SynDBRemote.pas:
SynDB Remote access Server classes hierarchy
You can define either a HTTP server based on the socket API - TSQLDBServerSockets - or the more stable and fast TSQLDBServerHttpApi class (under Windows only), which uses the http.sys kernel mode HTTP server available since Windows XP - see below.
For the client side, you could use one of the following classes also defined in SynDBRemote.pas:
SynDB Remote access Client classes hierarchy
Note that TSQLDBHttpRequestConnectionProperties is an abstract parent class, so you should not instantiate it directly, but one of its inherited implementations.
As you can see, you may choose between a pure socket API client, others using WinINet or WinHTTP (under Windows), or the libcurl API (especially on Linux). The TSQLDBWinHTTPConnectionProperties class is the more stable over the Internet on Windows, even if plain sockets tend to give better numbers on localhost as stated by our Data access benchmark. Please read below for a comparison of the diverse APIs.
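For instance - a minimal sketch, using placeholder file name, port and credentials which match the description below:
Props := TSQLDBSQLite3ConnectionProperties.Create('data.db3','','','');
Server := TSQLDBServerHttpApi.Create(Props,'syndbremote','8092','user','pass');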
The above code will initialize a connection to a local data.db3 SQLite3 database (in the Props variable), and then publish it using the http.sys kernel mode HTTP server to the http://1.2.3.4:8092/syndbremote URI - if the server's IP is 1.2.3.4.
A first user is defined, with 'user' / 'pass' credentials. Note that in our remote access, user management does not match the RDBMS user rights: you should better have your own set of users at application level, for higher security, and a better integration with your business logic. If creating a new user on a RDBMS could be painful, managing remote user authentication is pretty easy on the SynDBRemote.pas side, by using the Protocol.Authenticate property of the server:
You could also share mORMot's REST authentication users - see below - by replacing the default TSynAuthentication class instance with TSynAuthenticationRest, as defined in mORMot.pas. Note that using SynDBRemote and mORMot's ORM/SOA at the same time sounds like a weak design, but it may have its benefits when dealing with legacy code, and a lot of existing SQL statements.
The URI should be registered to work as expected, just as expected by the http.sys API - see below. You may either run the server once with the system Administrator rights, or call the following method (as we do in TestSQL3Register.dpr) in your setup application:
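As a sketch, assuming the THttpApiServer.AddUrlAuthorize() class method from SynCrtSock.pas:
THttpApiServer.AddUrlAuthorize('syndbremote','8092',{https=}false,'+');
On the client side, a minimal connection sketch may then be written as such - IP, database name and credentials mirror the server example above:
Props := TSQLDBWinHTTPConnectionProperties.Create(
  '1.2.3.4:8092','syndbremote','user','pass');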
As you can see, there is no link to SynDBSQLite3.pas nor SynSQLite3Static.pas on the client side. Just the HTTP link is needed. No need to deploy the RDBMS client libraries with your application, nor setup the local network firewall.
We defined here a single user, with 'user' / 'pass' credentials, but you may manage more users on the server side, using the Protocol.Authenticate property of TSQLDBServerAbstract.
Then, you execute your favorite SQL using the connection just as usual:
procedure Test(Props: TSQLDBConnectionProperties);
var Stmt: ISQLDBRows;
begin
Stmt := Props.Execute('select * from People where YearOfDeath=?',[1519]);
while Stmt.Step do
begin
assert(Stmt.ColumnInt('ID')>0);
assert(Stmt.ColumnInt('YearOfDeath')=1519);
end;
end;
Or you may use it with VCL components, using the SynDBVCL.pas unit:
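A sketch only - assuming a ToDataSet() helper as published by SynDBVCL.pas to wrap an ISQLDBRows result set into a TSynSQLStatementDataSet (ds1 would be a TDataSource on your form):
ds1.DataSet := ToDataSet(ds1,
  Props.Execute('select * from People',[]));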
The TSynSQLStatementDataSet result set will map directly the raw binary data returned by the TSQLDBServer* class, avoiding any slow data marshalling in your client application, even for huge content. Note that the whole data set is computed and sent by the server: even if you display only the first rows in your TDBGrid, all the data has been transmitted. In fact, partial retrieval works well on a local network, but is not a good idea over the Internet, due to its much higher ping. So consider adding some filter fields, or some application-level paging, to reduce the number of rows retrieved from the SynDBRemote server.
If you defined your own TSynAuthentication class on the server side (e.g. to use REST users and groups via TSynAuthenticationRest), you should create your own client connection class, and override the following method:
This overridden method will inherit from TSQLDBWinHTTPConnectionProperties all its behavior, but use the ORM/SOA authentication scheme for validating its users on the server side.
8.2.6.4. Advanced use cases
You may use this remote connection feature e.g. to mutate a stand-alone shared SQLite3 database into a high performance but low maintenance client-server database engine. You may create it as such on the server side:
You could share an existing SQlite3 database instance (e.g. a TSQLRestServerDB used for our RESTful ORM - see Database layer) by creating the properties as such:
If you use the http.sys kernel-mode server, you could share the same IP port between regular ORM/SOA operations (which may be 80 for a "pure" HTTP server), and remote SynDB access, if the database name (i.e. here 'syndbremote') does not conflict with an ORM table nor a method service.
Note that you can also customize the transmission protocol by setting your own TSQLDBProxyConnectionProtocol class on both client and server sides.
8.2.6.5. Integration with SynDBExplorer
Our SynDBExplorer tool is able to publish in one click any SynDB connection as a HTTP server, or connect to it via HTTP. It could be very handy, even for debugging purposes.
To serve an existing database, just connect to it as usual. Then click on the "HTTP Server" button below the table lists (on left side). You can tune the server properties (HTTP port, database name used for URI, user credentials), then click on the "Start" button.
To connect to this remote connection, run another instance of SynDBExplorer. Create a new connection, using "Remote HTTP" as connection type, and set the other options with the values matching the server side, e.g. with the default "localhost:8092" (replacing localhost with the server IP for an access over the network) for the server name, "syndbremote" for the database name, and "synopse" for both user name and password.
You will be able to access the main server instance remotely, just as if the database was accessed via a regular client.
If the server side database is SQLite3, you just mutated this local engine into a true client-server database - you may be amazed by the resulting performance.
8.2.6.6. Do not forget the mORMot!
Even if you may be tempted to use such remote access to implement an n-Tier architecture, you should rather use mORMot's Client-Server ORM instead - see below - which offers much better client-server integration - due to the Persistence Ignorance pattern of Domain-Driven Design, a better OOP and SOLID modeling design - see below, and even higher performance than raw SQL operations - see e.g. below. Our little mORMot is not an ORM on which we added a data transmission layer: it is a full RESTful system, with a true SOA design.
But for integrating some legacy SQL code into a new architecture, SynDBRemote.pas may have its benefits, used in conjunction with mORMot's higher level features.
Note that for cross-platform clients, mORMot's ORM/SOA patterns are a much better approach: do not put SQL in your mobile application, but use services, so that you will not need to re-validate and re-publish the app to the store after any small fix of your business logic!
When mapping your classes to an external database, there are two possible approaches:
Start from scratch, i.e. write your classes and let the ORM create all the database structure, which will reflect directly the object properties - this is also named "code-first";
Use an existing database, and then define in your model how your classes map the existing database structure - this is the "database-first" option.
Our mORMot framework implements both paths, even if, like for other ORMs, code-first sounds like the more straightforward option.
8.3.2. Code-first ORM
An external record can be defined as such, as expected by mORMot's ORM:
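Here is a sketch of such a class definition, reconstructed from the fields used in the regression test below - the exact original declaration may differ slightly:
type
  TSQLRecordPeopleExt = class(TSQLRecord)
  protected
    fFirstName: RawUTF8;
    fLastName: RawUTF8;
    fData: TSQLRawBlob;
    fYearOfBirth: integer;
    fYearOfDeath: integer;
    fLastChange: TModTime;
    fCreatedAt: TCreateTime;
  published
    property FirstName: RawUTF8 index 40 read fFirstName write fFirstName;
    property LastName: RawUTF8 index 40 read fLastName write fLastName;
    property Data: TSQLRawBlob read fData write fData;
    property YearOfBirth: integer read fYearOfBirth write fYearOfBirth;
    property YearOfDeath: integer read fYearOfDeath write fYearOfDeath;
    property LastChange: TModTime read fLastChange write fLastChange;
    property CreatedAt: TCreateTime read fCreatedAt write fCreatedAt;
  end;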
The only difference is the index 40 attribute in the definition of the FirstName and LastName published properties: this will define the length (in UTF-16 WideChar or UTF-8 bytes) to be used when creating the external field for the TEXT column - see e.g. the FirstName property definition above.
In fact, SQLite3 does not care about textual field length, but almost all other database engines expect a maximum length to be specified when defining a VARCHAR column in a table. If you do not specify any length in your field definition (i.e. if there is no index ??? attribute), the ORM will create a column with an unlimited length (e.g. varchar(max) for MS SQL Server). In this case, the code will work, but performance and disk usage may be highly degraded, since access via a CLOB is known to be notably slower. The only exceptions to this performance penalty are SQLite3 and PostgreSQL, for which size-unlimited TEXT columns are as fast to process as varchar(#).
By default, no check will be performed by the ORM to ensure that the field length is compliant with the column size expectation in the external database. You can use TSQLRecordProperties's SetMaxLengthValidatorForTextFields() or SetMaxLengthFilterForTextFields() method to create a validation or filter rule to be performed before sending the data to the external database - see Filtering and Validating.
Here is an extract of the regression test corresponding to external databases:
var RExt: TSQLRecordPeopleExt;
(...)
fProperties := TSQLDBSQLite3ConnectionProperties.Create(SQLITE_MEMORY_DATABASE_NAME,'','','');
VirtualTableExternalRegister(fExternalModel,TSQLRecordPeopleExt,fProperties,'PeopleExternal');
aExternalClient := TSQLRestClientDB.Create(fExternalModel,nil,'testExternal.db3',TSQLRestServerDB);
try
aExternalClient.Server.StaticVirtualTableDirect := StaticVirtualTableDirect;
aExternalClient.Server.CreateMissingTables;
Check(aExternalClient.Server.CreateSQLMultiIndex(
TSQLRecordPeopleExt,['FirstName','LastName'],false));
(...)
Start := aExternalClient.ServerTimestamp;
(...)
aID := aExternalClient.Add(RExt,true);
(...)
aExternalClient.Retrieve(aID,RExt);
(...)
aExternalClient.BatchStart(TSQLRecordPeopleExt);
aExternalClient.BatchAdd(RExt,true);
(...)
Check(aExternalClient.BatchSend(BatchID)=HTTP_SUCCESS);
Check(aExternalClient.TableHasRows(TSQLRecordPeopleExt));
Check(aExternalClient.TableRowCount(TSQLRecordPeopleExt)=n);
(...)
RExt.FillPrepare(aExternalClient,'FirstName=? and LastName=?',
  [RInt.FirstName,RInt.LastName]); // query will use index -> fast :)
while RExt.FillOne do ...
(...)
Updated := aExternalClient.ServerTimestamp;
(...)
aExternalClient.Update(RExt);
aExternalClient.UnLock(RExt);
(...)
aExternalClient.BatchStart(TSQLRecordPeopleExt);
aExternalClient.BatchUpdate(RExt);
(...)
aExternalClient.BatchSend(BatchIDUpdate);
(...)
aExternalClient.Delete(TSQLRecordPeopleExt,i)
(...)
aExternalClient.BatchStart(TSQLRecordPeopleExt);
aExternalClient.BatchDelete(i);
(...)
aExternalClient.BatchSend(BatchIDUpdate);
(...)
for i := 1 to BatchID[high(BatchID)] do
begin
RExt.fLastChange := 0;
RExt.CreatedAt := 0;
RExt.YearOfBirth := 0;
ok := aExternalClient.Retrieve(i,RExt,false);
Check(ok=(i and 127<>0),'deletion');
if ok then
begin
Check(RExt.CreatedAt>=Start);
Check(RExt.CreatedAt<=Updated);
if i mod 100=0 then
begin
Check(RExt.YearOfBirth=RExt.YearOfDeath,'Update');
Check(RExt.LastChange>=Updated);
end else begin
Check(RExt.YearOfBirth<>RExt.YearOfDeath,'Update');
Check(RExt.LastChange>=Start);
Check(RExt.LastChange<=Updated);
end;
end;
end;
(...)
As you can see, there is no difference with using the local SQLite3 engine or a remote database engine. From the Client point of view, you just call the usual RESTful CRUD methods, i.e. Add() Retrieve() Update() UnLock() Delete() - or their faster Batch*() revision - and you can even handle advanced methods like a FillPrepare with a complex WHERE clause, or CreateSQLMultiIndex / CreateMissingTables on the server side.
Even the creation of the table in the remote database (the 'CREATE TABLE...' SQL statement) is performed by the framework when the CreateMissingTables method is called, with the appropriate column properties according to the database expectations (e.g. a TEXT for SQLite3 will be a NVARCHAR2 field for Oracle).
The resulting table layout on the external database will be the following:
TSQLRecordPeopleExt Code-First Field/Column Mapping
The only specific instruction is the global VirtualTableExternalRegister() function, which has to be run on the server side (it does not make any sense to run it on the client side, since for the client there is no difference between any tables - in short, the client does not care about storage; the server does).
Note that the TSQLRecordPeopleExt.LastChange field was defined as a TModTime: in fact, the current date and time will be stored each time the record is updated, i.e. for each aExternalClient.Add or aExternalClient.Update call. This is tested by both the RExt.LastChange>=Start and RExt.LastChange<=Updated checks in the last loop. The time used is the "server-time", i.e. the current time and date on the server (not on the client), and, in the case of external databases, the time of the remote server (it will execute e.g. a select getdate() under MS SQL to synchronize the date to be inserted for LastChange). In order to retrieve this server-side time stamp, we use Start := aExternalClient.ServerTimestamp instead of the local TimeLogNow function.
A similar feature is tested for the CreatedAt published field, which was defined as TCreateTime: it will be set automatically to the current server time at record creation (and not changed on modifications). This is the purpose of the RExt.CreatedAt<=Updated check in the above code.
8.3.3. Database-first ORM
As we have just seen, the following line initializes the ORM to let TSQLRecordPeopleExt data be accessed via SQL, over an external database connection fProperties:
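That is, as already used in the regression test above:
VirtualTableExternalRegister(fExternalModel,TSQLRecordPeopleExt,fProperties,'PeopleExternal');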
We also customized the name of the external table, from its default 'PeopleExt' (computed by trimming TSQLRecord prefix from TSQLRecordPeopleExt) into 'PeopleExternal'.
In addition to table name mapping, the ORM is also able to map the TSQLRecord published property names to any custom database column name. It is in fact very common that tables of existing databases do not have very explicit column naming, which may sound pretty weird when mapped directly as TSQLRecord property names. Even the primary keys of your existing database probably won't match the ORM's requirement of naming them ID. All this can be set up as expected.
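For instance, the mapping described below could be defined with something like this sketch, assuming the fluent MapField() method of the ExternalDB mapping:
fExternalModel.Props[TSQLRecordPeopleExt].ExternalDB.
  MapField('ID','Key').
  MapField('YearOfDeath','YOD');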
Then you use your TSQLRecordPeopleExt table as usual from Delphi code, with ID and YearOfDeath fields.
But, under the hood, the mORMot ORM will do the mapping when creating all needed SQL statements:
The "internal" TSQLRecord class will be stored within the PeopleExternal external table;
The "internal" TSQLRecord.ID field will be an external "Key: INTEGER" column;
The "internal" TSQLRecord.YearOfDeath field will be an external "YOD: INTEGER" column;
Other internal published properties will be mapped by default with the same name to external column.
The resulting mapping will therefore be the following:
TSQLRecordPeopleExt Database-First Field/Column Mapping
Note that only the ID and YearOfDeath column names were customized.
Due to the design of SQLite3 virtual tables, and mORMot internals in its current state, the database primary key must be an INTEGER field to be mapped as expected by the ORM. But you can specify any secondary key, e.g. a TEXT field, via stored AS_UNIQUE definition in code.
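For instance, a hypothetical unique text field could be declared as such within the TSQLRecord class (fEmail being its backing field):
property Email: RawUTF8 index 40 read fEmail write fEmail stored AS_UNIQUE;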
8.3.4. Sharing the database with legacy code
It is pretty much possible that you will have to maintain and evolve a legacy project, based on an existing database, with a lot of already written SQL statements - see Legacy code and existing projects. For instance, you would like to use mORMot for new features, and/or add mobile or HTML clients - see below. In this case, the ORM advanced features - like ORM Cache or BATCH process, see below - may conflict with the legacy code, for the tables which may have to be shared. Here are some guidelines when working on such a project.
To be exhaustive about this question, we need to consider each ORM CRUD operation. We may have to divide them in three kinds: read queries, insertions, and modifications of existing data.
About ORM read queries, i.e. Retrieve() methods, the ORM cache can be tuned per table; you will definitely miss some caching, but remember:
That you can set a "time out" period for this cache, so that you may still benefit of it in most cases;
That you have a cache at server level and another at client level, so you can tune it to be less aggressive on the client, for instance;
That you can tune the ORM cache per ID, so some items which are not likely to change can still be cached.
About ORM insertions, i.e. Add() or BatchAdd() methods, when using the external engine, if any external process is likely to INSERT new rows, ensure you set the TSQLRestStorageExternal.EngineAddUseSelectMaxID property to TRUE, so that it will compute the next maximum ID by hand. But it still may be an issue, since the external process may do an INSERT during the ORM insertion. So the best is perhaps to NOT use the ORM Add() or BatchAdd() methods, but to rely on dedicated INSERT SQL statements, e.g. hosted in an interface-based service on the server side.
About ORM modifications, i.e. Update() Delete() BatchUpdate() BatchDelete() methods, they sound safe to be used in conjunction with an external process modifying the DB, as soon as you use transactions to let the modifications be atomic, so that they won't conflict with any concurrent modifications in the legacy code.
Perhaps the safest pattern, when working with external tables which are to be modified in the background by some legacy code, is to by-pass those ORM methods, and define server-side interface-based services - see below. Those services may contain manual SQL, instead of using the ORM "magic". But it will depend on your business logic, and you will fail to benefit from the ORM features of the framework. Nevertheless, introducing a Service-Oriented Architecture (SOA) into your application will be very beneficial: ORM is not mandatory, especially if you are "fluent" in SQL queries, know how to make them as standard as possible, and have a lot of legacy code, perhaps with already tuned SQL statements.
Introducing SOA is mandatory to interface new kinds of clients to your applications, like mobile apps or modern AJAX sites. To be fair, you should not access the database directly any more, as you did with your legacy Delphi application and RAD DB components. All new features, involving new tables to store new data, will still benefit from mORMot's ORM, and could still be hosted in the very same external database, shared by your existing code. Then, you will be able to identify seams - see Legacy code and existing projects - in your legacy code, and move them to your new mORMot services, then let your application evolve into a newer SOA/MVC architecture, without breaking anything, nor starting from scratch.
8.3.5. Auto-mapping of SQL conflictual field names
If your application is likely to be run on several databases, it may be difficult to handle any potential field name conflict, when you switch from one engine to another. The ORM allows you therefore to ensure that no field name will conflict with a SQL keyword of the underlying database.
In code-first mode, you can use the following method to ensure that no such conflict occurs:
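For instance - a sketch only, following the mapping example above:
fExternalModel.Props[TSQLRecordPeopleExt].ExternalDB.MapAutoKeywordFields;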
It is a good idea to call the MapAutoKeywordFields method after any manual field mapping for a database-first database, since even your custom field names may conflict with a SQL keyword.
If any field name is likely to conflict with a SQL keyword, it will be mapped with a trailing '_'. For instance, a 'Select' published property will be mapped into a SELECT_ column in the table.
Even if this option is disabled by default, a warning message will appear in the log proposing to use this MapAutoKeywordFields method, and will help you to identify such issues.
8.3.6. External database ORM internals
The mORMotDB.pas unit implements Virtual Tables access for any SynDB.pas-based external database for the framework.
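Registration of a class for external persistence is performed by the VirtualTableExternalRegister() function, whose signature is along these lines (a sketch matching the call used in the regression test above); a few notes about its parameters follow:
function VirtualTableExternalRegister(aModel: TSQLModel; aClass: TSQLRecordClass;
  aExternalDB: TSQLDBConnectionProperties;
  const aExternalTableName: RawUTF8): boolean;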
The TSQLDBConnectionProperties instance should be shared by all classes, and released globally when the ORM is no longer needed;
The full table name, as expected by the external database, should be provided here (SQLTableName will be used internally as table name when called via the associated SQLite3 Virtual Table) - if no table name is specified (''), SQLTableName will be used (e.g. 'Customer' for a class named TSQLCustomer);
Internal adjustments will be made to convert SQL on the fly from the internal ORM representation into the expected external SQL format (e.g. table name or ID property) - see the TSQLRestStorage.AdaptSQLForEngineList method.
All the rest of the code will use the "regular" ORM classes, methods and functions, as stated by Object-Relational Mapping.
In order to be stored in an external database, the ORM records can inherit from any TSQLRecord class. Even if this class does not inherit from TSQLRecordVirtualTableAutoID, it will behave as such, once VirtualTableExternalRegister function has been called for the given class.
As with any regular TSQLRecord classes, the ORM core will expect external tables to map an Integer ID published property, auto-incremented at every record insertion. Since not all databases handle such fields - e.g. Oracle - auto-increment will be handled via a select max(id) from tablename statement run at initialization, then computed on the fly via a thread-safe cache of the latest inserted RowID.
You do not have to know where and how the data persistence is stored. The framework will do all the low-level DB work for you. And thanks to the Virtual Table feature of SQlite3, internal and external tables can be mixed within SQL statements. Depending on the implementation needs, classes could be persistent either via the internal SQLite3 engine, or via external databases, just via a call to VirtualTableExternalRegister() before server initialization.
In fact, TSQLVirtualTableCursorExternal will convert any query on the external table into a proper optimized SQL query, according to the indexes existing on the external database. TSQLVirtualTableExternal will also convert individual SQL modification statements (like insert / update / delete) at the SQLite3 level into remote SQL statements to the external database.
Most of the time, all RESTful methods (GET/POST/PUT/DELETE) will be handled directly by the TSQLRestStorageExternal class, and won't use the virtual table mechanism. In practice, most access to the external database will be as fast as direct access, but the virtual table will always be ready to interpret any cross-database complex request or statement.
Direct REST access will be processed as following - when adding an object, for instance:
ORM Access Via REST
Indirect access via virtual tables will be processed as following:
ORM Access Via Virtual Table
About speed, here is an extract of the test regression log file (see code above, in the previous paragraph), which shows the difference between RESTful call and virtual table call, working with more than 11,000 rows of data:
- External via REST: 133,666 assertions passed 409.82ms
- External via virtual table: 133,666 assertions passed 1.12s
The first run is made with TSQLRestServer.StaticVirtualTableDirect set to TRUE (which is the default) - i.e. it will call directly TSQLRestStorageExternal for RESTful commands, and the second will set this property to FALSE - i.e. it will call the SQLite3 engine and let its virtual table mechanism convert it into another SQL calls.
It is worth saying that this test is using an in-memory SQLite3 database (i.e. instantiated via SQLITE_MEMORY_DATABASE_NAME as pseudo-file name) as its external DB, so what we test here is mostly the ORM overhead, not the external database speed. With real file-based or remote databases (like MS SQL), the overhead of the remote connection will make the use of Virtual Tables barely noticeable.
In all cases, letting the default StaticVirtualTableDirect=true will ensure the best possible performance. As stated by Data access benchmark, using a virtual or direct call won't affect the CRUD operation speed: it will by-pass the virtual engine whenever possible.
8.3.7. Tuning the process
Multi-threading abilities of the server, and all available settings, will be detailed below. By default, all ORM read operations will be run in concurrent mode, and all ORM write operations will be executed in blocking mode. This is expected to be both safe and fast, with our internal SQLite3 engine, or most of the external databases. But you may need to change this default behavior, depending on the external engine you are connected to.
First of all, some database client libraries may not allow transactions to be shared among several threads - for instance MS SQL. Other clients may consume a lot of resources for each connection, or may not have good multi-thread scaling abilities. Some database servers fork their process for each connected client - for instance PostgreSQL: you may want to reduce the server resources by using only one connection, hence only one process on the server. To avoid such problems, you can force all ORM write operations to be executed in a dedicated thread, i.e. by setting amMainThread (which is not very opportune on a server without UI), or, even better, via amBackgroundThread or amBackgroundORMSharedThread:
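For instance - a one-line sketch, assuming aServer is your TSQLRestServer instance:
aServer.AcquireWriteMode := amBackgroundThread;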
Secondly, especially on a long-running n-Tier mORMot server, you may suffer from broken connection exceptions. For instance, after a night without any activity, the attempts to access the external database may fail in the morning, since the connection may have been dropped by the database server in the meanwhile. You can use the TSQLDBConnectionProperties.ConnectionTimeOutMinutes property to specify a maximum period of inactivity after which all connections will be flushed and recreated, to avoid potential broken connection issues. In practice, recreating the connections after a while is safe and won't slow down the process - on the contrary, it may help reducing the consumed resources, and stabilize long running n-Tier servers. The ThreadSafeConnection method will check for the last activity on its TSQLDBConnectionProperties instance, and then call ClearConnectionPool to release all active connections if the idle time elapsed was too long.
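For instance (the 30 minutes value below is arbitrary):
Props.ConnectionTimeOutMinutes := 30;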
As a consequence, if you use this ConnectionTimeOutMinutes property, you should ensure that no other connection is still active on the background, otherwise some unexpected issues may occur. For instance, you should ensure that your mORMot ORM server runs all its statements in blocking mode for both read and write:
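For instance - a sketch, again assuming aServer is your TSQLRestServer instance:
aServer.AcquireExecutionMode[execORMGet] := amBackgroundORMSharedThread;
aServer.AcquireExecutionMode[execORMWrite] := amBackgroundORMSharedThread;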
Here above, safe blocking am*** modes are any mode but amUnlocked, i.e. either amLocked, amBackgroundThread, amBackgroundORMSharedThread or amMainThread.
Remember the diagram introducing mORMot's Database layer:
mORMot Persistence Layer Architecture
The following NoSQL engines can be accessed from mORMot's Object Document Mapping (ODM) abilities:
NoSQL Engine - Description
TObjectList - In-memory storage, with JSON or binary disk persistence
MongoDB - #1 NoSQL database engine
We can in fact consider our TSQLRestStorageInMemory instance, and its TObjectList storage, as a NoSQL very fast in-memory engine, written in pure Delphi. See In-Memory "static" process for details about this feature.
MongoDB (from "humongous") is a cross-platform document-oriented database system, and certainly the best known NoSQL database. According to http://db-engines.com in December 2015, MongoDB is at 4th place of the most popular types of database management systems, and at first place for NoSQL database management systems. Our mORMot gives premium access to this database, featuring full NoSQL and Object-Document Mapping (ODM) abilities to the framework.
Integration is made at two levels:
Direct low-level access to the MongoDB server, in the SynMongoDB.pas unit;
Close integration with our ORM (which becomes de facto an ODM), in the mORMotMongoDB.pas unit.
MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas (MongoDB calls the format BSON), which match perfectly mORMot's RESTful approach.
9.1. SynMongoDB client
The SynMongoDB.pas unit features direct optimized access to a MongoDB server.
It gives access to any BSON data, including documents, arrays, and MongoDB's custom types (like ObjectID, dates, binary, regex, Decimal128 or Javascript):
For instance, a TBSONObjectID can be used to create some genuine document identifiers on the client side (MongoDB does not generate the IDs for you: a common way is to generate unique IDs on the client side);
Generation of BSON content from any Delphi types (via TBSONWriter);
Fast in-place parsing of the BSON stream, without any memory allocation (via TBSONElement);
A TBSONVariant custom variant type, to store MongoDB's custom type values;
It features some nice abilities about speed, like BULK insert or delete mode, and explicit Write Concern settings.
At collection level, you can have direct access to the data, with high level structures like TDocVariant/TBSONVariant, with easy-to-read JSON, or low level BSON content. You can also tune most aspects of the client process, e.g. about error handling or write concerns (i.e. how remote data modifications are acknowledged).
9.1.1. Connecting to a server
Here is some sample code, which is able to connect to a MongoDB server, and returns the server time:
var Client: TMongoClient;
DB: TMongoDatabase;
serverTime: TDateTime;
res: variant; // we will return the command result as TDocVariant
errmsg: RawUTF8;
begin
Client := TMongoClient.Create('localhost',27017);
try
DB := Client.Database['mydb'];
writeln('Connecting to ',DB.Name); // will write 'mydb'
errmsg := DB.RunCommand('hostInfo',res); // run a command
if errmsg<>'' then
exit; // quit on any error
serverTime := res.system.currentTime; // direct conversion to TDateTime
writeln('Server time is ',DateTimeToStr(serverTime));
finally
Client.Free; // will release the DB instanceend;
end;
Note that for this low-level command, we used a TDocVariant, and its late-binding abilities.
In fact, if you put your mouse over the res variable during debugging, you will see the following JSON content:
{"system":{"currentTime":"2014-05-06T15:24:25","hostname":"Acer","cpuAddrSize":64,"memSizeMB":3934,"numCores":4,"cpuArch":"x86_64","numaEnabled":false},"os":{"type":"Windows","name":"Microsoft Windows 7","version":"6.1 SP1 (build 7601)"},"extra":{"pageSize":4096},"ok":1}
And we simply access to the server time by writing res.system.currentTime.
Here connection was made anonymously. It will work only if the mongod instance is running on the same computer. Safe remote connection, including user authentication, could be made via the TMongoClient.OpenAuth() method: it supports the latest SCRAM-SHA-1 challenge-response mechanism (supported since MongoDB 3.x), or the deprecated MONGODB-CR (for older versions).
...
Client := TMongoClient.Create('localhost',27017);
try
DB := Client.OpenAuth('mydb','mongouser','mongopwd');
...
9.1.2. Adding some documents to the collection
We will now explain how to add documents to a given collection.
We assume that we have a DB: TMongoDatabase instance available. Then we will create the documents with a TDocVariant instance, which will be filled via late-binding, and via a doc.Clear pseudo-method used to flush any previous property value:
var Coll: TMongoCollection;
doc: variant;
i: integer;
begin
Coll := DB.CollectionOrCreate[COLL_NAME];
TDocVariant.New(doc);
for i := 1 to 10 do begin
doc.Clear;
doc.Name := 'Name '+IntToStr(i+1);
doc.Number := i;
Coll.Save(doc);
writeln('Inserted with _id=',doc._id);
end;
end;
Thanks to TDocVariant late-binding abilities, code is pretty easy to understand and maintain.
This code will display the following on the console:
Inserted with _id=5369029E4F901EE8114799D9
Inserted with _id=5369029E4F901EE8114799DA
Inserted with _id=5369029E4F901EE8114799DB
Inserted with _id=5369029E4F901EE8114799DC
Inserted with _id=5369029E4F901EE8114799DD
Inserted with _id=5369029E4F901EE8114799DE
Inserted with _id=5369029E4F901EE8114799DF
Inserted with _id=5369029E4F901EE8114799E0
Inserted with _id=5369029E4F901EE8114799E1
Inserted with _id=5369029E4F901EE8114799E2
It means that the Coll.Save() method was clever enough to understand that the supplied document does not have any _id field, so will compute one on the client side before sending the document data to the MongoDB server.
We may have written:
for i := 1 to 10 do begin
doc.Clear;
doc._id := ObjectID;
doc.Name := 'Name '+IntToStr(i+1);
doc.Number := i;
Coll.Save(doc);
writeln('Inserted with _id=',doc._id);
end;
end;
This will compute the document identifier explicitly before calling Coll.Save(). In this case, we could have called Coll.Insert() directly, which is somewhat faster.
Note that you are not obliged to use a MongoDB ObjectID as identifier. You can use any value, if you are sure that it will be genuine. For instance, you can use an integer:
for i := 1 to 10 do begin
doc.Clear;
doc._id := i;
doc.Name := 'Name '+IntToStr(i+1);
doc.Number := i;
Coll.Insert(doc);
writeln('Inserted with _id=',doc._id);
end;
end;
The console will display now:
Inserted with _id=1
Inserted with _id=2
Inserted with _id=3
Inserted with _id=4
Inserted with _id=5
Inserted with _id=6
Inserted with _id=7
Inserted with _id=8
Inserted with _id=9
Inserted with _id=10
Note that the mORMot ORM will compute a genuine series of integers in a similar way, which will be used as expected by the TSQLRecord.ID primary key property.
The TMongoCollection class can also write a list of documents, and send them at once to the MongoDB server: this BULK insert mode - close to the Array Binding feature of some SQL providers, and implemented in our SynDB.pas classes - see below - can increase the insertion speed by a factor of 10, even when connected to a local instance: imagine how much time it may save over a physical network!
For instance, you may write:
var docs: TVariantDynArray;
...
SetLength(docs,COLL_COUNT);
for i := 0 to COLL_COUNT-1 do begin
  TDocVariant.New(docs[i]);
docs[i]._id := ObjectID; // compute new ObjectID on the client side
docs[i].Name := 'Name '+IntToStr(i+1);
docs[i].FirstName := 'FirstName '+IntToStr(i+COLL_COUNT);
docs[i].Number := i;
end;
Coll.Insert(docs); // insert all values at once
...
You will find below some numbers showing the speed increase brought by such BULK insertion.
9.1.3. Retrieving the documents
You can retrieve the document as a TDocVariant instance:
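For instance, a single document could be retrieved like this sketch (assuming the FindDoc() overload taking a JSON criteria with ? parameters - check SynMongoDB.pas for the exact signature):
var doc: variant;
...
doc := Coll.FindDoc('{_id:?}',[5]); // retrieve the document with _id=5
writeln('Name: ',doc.Name,' Number: ',doc.Number); // late-binding access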
You can retrieve a list of documents, as a dynamic array of TDocVariant:
var docs: TVariantDynArray;
...
Coll.FindDocs(docs);
for i := 0 to high(docs) do
writeln('Name: ',docs[i].Name,' Number: ',docs[i].Number);
Which will output:
Name: Name 2 Number: 1
Name: Name 3 Number: 2
Name: Name 4 Number: 3
Name: Name 5 Number: 4
Name: Name 6 Number: 5
Name: Name 7 Number: 6
Name: Name 8 Number: 7
Name: Name 9 Number: 8
Name: Name 10 Number: 9
Name: Name 11 Number: 10
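A filtered query may look like the following sketch (the '{name:?,age:{$gt:?}}' criteria and the FindDocs() overload returning a TVariantDynArray are assumptions used for illustration - check SynMongoDB.pas for the exact signatures):
var docs: TVariantDynArray;
...
// hypothetical filter: all people named 'John' with age greater than 21
docs := Coll.FindDocs('{name:?,age:{$gt:?}}',['John',21],null);
// the resulting array may then be mapped to a read-only TDataSet via ToDataSet()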
This overloaded FindDocs() method takes a query filter as JSON and parameters (following the MongoDB syntax), and a Projection mapping (null to retrieve all properties). It returns a TVariantDynArray result, which can be mapped to an optimized read-only TDataSet using the overloaded ToDataSet() function - so in this case, a DB grid could be filled with all people named 'John', with age greater than 21.
If you want to retrieve the documents directly as JSON, we can write:
var json: RawUTF8;
...
json := Coll.FindJSON(null,null);
writeln(json);
...
You can note that FindJSON() has two parameters, which are the Query filter, and a Projection mapping (similar to the column names of a SELECT col1,col2). So we may have written:
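A sketch of such a call, using BSONVariant() to supply both the query and the projection (the values are illustrative):
json := Coll.FindJSON(BSONVariant(['_id',5]),BSONVariant(['Name',1]));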
This will only return the "Name" and "_id" fields (since _id is, by MongoDB convention, always returned):
[{"_id":5,"Name":"Name 6"}]
To return only the "Name" field, you can specify '_id:0,Name:1' as JSON in extended syntax for the projection parameter.
[{"Name":"Name 6"}]
There are other methods able to retrieve data, also directly as BSON binary data. They will be used for best speed e.g. in conjunction with our ORM, but for most end-user code, using TDocVariant is safer and easier to maintain.
9.1.3.1. Updating or deleting documents
The TMongoCollection class has some methods dedicated to alter existing documents.
At first, the Save() method can be used to update a document which has been first retrieved:
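A minimal sketch (assuming the FindDoc() overload used above - the _id value is illustrative):
doc := Coll.FindDoc('{_id:?}',[5]); // retrieve an existing document
doc.Name := 'New!';                 // modify a property via late-binding
Coll.Save(doc);                     // _id is present, so an update is performed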
You can refer to the documentation of the SynMongoDB.pas unit, to find out all functions, classes and methods available to work with MongoDB.
Some very powerful features are available, including Aggregation (available since MongoDB 2.2), which offers a good alternative to standard Map/Reduce pattern. See http://docs.mongodb.org/manual/reference/command/aggregate for reference.
9.1.4. Write Concern and Performance
You can take a look at the MongoDBTests.dpr sample - located in the SQLite3\Samples\24 - MongoDB sub-folder of the source code repository, and the TTestDirect classes, to find out some performance information.
In fact, this TTestDirect is inherited twice, to run the same tests with diverse write concern:
MongoDB TTestDirect classes hierarchy
The difference between the two classes will take place at client initialization:
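A sketch of that initialization (assuming a fMongoClient field - see the MongoDBTests.dpr sample for the actual code):
// TTestDirectWithAcknowledge
fMongoClient := TMongoClient.Create('localhost',27017);
fMongoClient.WriteConcern := wcAcknowledged;

// TTestDirectWithoutAcknowledge
fMongoClient := TMongoClient.Create('localhost',27017);
fMongoClient.WriteConcern := wcUnacknowledged;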
wcAcknowledged is the default safe mode: the MongoDB server confirms the receipt of the write operation. Acknowledged write concern allows clients to catch network, duplicate key, and other errors. But it adds an additional round-trip from the client to the server, and wait for the command to be finished before returning the error status: so it will slow down the write process.
With wcUnacknowledged, MongoDB does not acknowledge the receipt of write operation. Unacknowledged is similar to errors ignored; however, drivers attempt to receive and handle network errors when possible. The driver's ability to detect network errors depends on the system's networking configuration.
The speed difference between the two is worth mentioning, as stated by the regression tests status, running on a local MongoDB instance:
1. Direct access
1.1. Direct with acknowledge:
- Connect to local server: 6 assertions passed 4.72ms
- Drop and prepare collection: 8 assertions passed 9.38ms
- Fill collection: 15,003 assertions passed 558.79ms
5000 rows inserted in 548.83ms i.e. 9110/s, aver. 109us, 3.1 MB/s
- Drop collection: no assertion 856us
- Fill collection bulk: 2 assertions passed 74.59ms
5000 rows inserted in 64.76ms i.e. 77204/s, aver. 12us, 7.2 MB/s
- Read collection: 30,003 assertions passed 2.75s
5000 rows read at once in 9.66ms i.e. 517330/s, aver. 1us, 39.8 MB/s
- Update collection: 7,503 assertions passed 784.26ms
5000 rows updated in 435.30ms i.e. 11486/s, aver. 87us, 3.7 MB/s
- Delete some items: 4,002 assertions passed 370.57ms
1000 rows deleted in 96.76ms i.e. 10334/s, aver. 96us, 2.2 MB/s
Total failed: 0 / 56,527 - Direct with acknowledge PASSED 4.56s
1.2. Direct without acknowledge:
- Connect to local server: 6 assertions passed 1.30ms
- Drop and prepare collection: 8 assertions passed 8.59ms
- Fill collection: 15,003 assertions passed 192.59ms
5000 rows inserted in 168.50ms i.e. 29673/s, aver. 33us, 4.4 MB/s
- Drop collection: no assertion 845us
- Fill collection bulk: 2 assertions passed 68.54ms
5000 rows inserted in 58.67ms i.e. 85215/s, aver. 11us, 7.9 MB/s
- Read collection: 30,003 assertions passed 2.75s
5000 rows read at once in 9.99ms i.e. 500150/s, aver. 1us, 38.5 MB/s
- Update collection: 7,503 assertions passed 446.48ms
5000 rows updated in 96.27ms i.e. 51933/s, aver. 19us, 7.7 MB/s
- Delete some items: 4,002 assertions passed 297.26ms
1000 rows deleted in 19.16ms i.e. 52186/s, aver. 19us, 2.8 MB/s
Total failed: 0 / 56,527 - Direct without acknowledge PASSED 3.77s
As you can see, the reading speed is not affected by the Write Concern settings. But data writing can be multiple times faster, when each write command is not acknowledged.
Since there is no error handling, wcUnacknowledged is not to be used in production. You may use it for replication, or for data consolidation, e.g. feeding a database with a lot of existing data as fast as possible.
9.2. MongoDB + ORM = ODM
As a result, our ORM is able to be used as a NoSQL and Object-Document Mapping (ODM) framework, with almost no code change. Any MongoDB database can be accessed via RESTful commands, using JSON over HTTP - see below.
This integration benefits from the other parts of the framework (e.g. our UTF-8 dedicated process, which is also the native encoding for BSON), so you can easily mix SQL and NoSQL databases with the exact same code, and are still able to tune any SQL or MongoDB request in your code, if necessary.
From the client point of view, there is no difference between an ORM and an ODM: you may use a SQL engine as a storage for an ODM - via Shared nothing architecture (or sharding) - or even a NoSQL database as a regular ORM, with denormalization (even if it may void most advantages of NoSQL).
9.2.1. Define the TSQLRecord class
In the database model, we define a TSQLRecord class, as usual:
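For instance, a hypothetical layout may be (names and properties are illustrative - only the lack of any index attribute matters here):
type
  TSQLORM = class(TSQLRecord)
  protected
    fName: RawUTF8;
    fNumber: integer;
    fValue: variant;
    fInts: TIntegerDynArray;
  published
    property Name: RawUTF8 read fName write fName;
    property Number: integer read fNumber write fNumber;
    property Value: variant read fValue write fValue;
    property Ints: TIntegerDynArray index 1 read fInts write fInts;
  end;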
Note that we did not define any index ... values for the RawUTF8 property, as we would need for external SQL databases, since MongoDB does not put any restriction on text field length (as far as I know, the only SQL engines which allow this natively without any performance penalty are SQLite3 and PostgreSQL).
The property values will be stored in the native MongoDB layout, i.e. with a better coverage than the SQL types recognized by our SynDB* unit:
Delphi | MongoDB | Remarks
byte | int32 |
word | int32 |
integer | int32 |
cardinal | N/A | You should use Int64 instead
Int64 | int64 |
boolean | boolean | MongoDB has a boolean type
enumeration | int32 | store the ordinal value of the enumerated item (i.e. starting at 0 for the first element)
set | int64 | each bit corresponding to an enumerated item (therefore a set of up to 64 elements can be stored in such a field)
single | double |
double | double |
extended | double | stored as double (precision lost)
currency | double | stored as double (MongoDB does not have a BCD type)
32-bit RowID pointing to another record (warning: the field value contains pointer(RowID), not a valid object instance - the record content must be retrieved with late-binding via its ID using a PtrInt(Field) typecast or the Field.ID method), or by using e.g. CreateJoined() - is 64-bit on Win64
This type is an alias to RawByteString - those properties are not retrieved by default: you need to use RetrieveBlobFields() or set ForceBlobTransfert / ForceBlobTransertTable[] properties
BSON as defined in code by overriding TSQLRecord.InternalRegisterCustomProperties to produce true JSON
You can share the same TSQLRecord definition with MongoDB and other storage means, like external SQL databases. Unused information (like the index attribute) will just be ignored.
Note that TSQLRecord, TID and TRecordReference* published properties will automatically create an index on the corresponding field, and that a kind of ON DELETE SET DEFAULT tracking will take place for TSQLRecord and TRecordReference properties, and ON DELETE CASCADE for TRecordReferenceToBeDeleted - but not for TID, since we do not know which table to track.
9.2.2. Register the TSQLRecord class
On the server side (there won't be any difference for the client), you define a TMongoClient, and assign it to a given TSQLRecord class, via a call to StaticMongoDBRegister():
MongoClient := TMongoClient.Create('localhost',27017);
DB := MongoClient.Database['dbname'];
Model := TSQLModel.Create([TSQLORM]);
Client := TSQLRestClientDB.Create(Model,nil,':memory:',TSQLRestServerDB);
if StaticMongoDBRegister(TSQLORM,Client.Server,DB,'collectionname')=nil then
  raise Exception.Create('Error');
And... that's all!
If all the tables of a mORMot server should be hosted on a MongoDB server, you could call the StaticMongoDBRegisterAll() function instead:
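A minimal sketch (check mORMotMongoDB.pas for the exact StaticMongoDBRegisterAll() signature):
MongoClient := TMongoClient.Create('localhost',27017);
DB := MongoClient.Database['dbname'];
Model := TSQLModel.Create([TSQLORM]); // and any other TSQLRecord classes
Client := TSQLRestClientDB.Create(Model,nil,':memory:',TSQLRestServerDB);
StaticMongoDBRegisterAll(Client.Server,DB); // host all tables on MongoDB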
As with external databases, you can specify the field names mapping between the objects and the MongoDB collection. By default, the TSQLRecord.ID property is mapped to the MongoDB's _id field, and the ORM will populate this _id field with a sequence of integer values, just like any TSQLRecord table. You can specify your own mapping, using e.g.:
aModel.Props[aClass].ExternalDB.MapField(..)
Since the field names are stored within the document itself, it may be a good idea to use shorter naming for the MongoDB collection. It may save some storage space, when working with a huge number of documents.
Once the TSQLRecord is mapped to a MongoDB collection, you can always have direct access to the corresponding TMongoCollection instance later on, using a simple transtyping:
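A sketch of such transtyping (assuming the Collection property of TSQLRestStorageMongoDB):
var Coll: TMongoCollection;
...
Coll := (Client.Server.StaticDataServer[TSQLORM] as TSQLRestStorageMongoDB).Collection;
writeln(Coll.Count); // direct low-level MongoDB access, bypassing the ORM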
As we already saw, the framework is able to handle any kind of properties, including complex types like dynamic arrays or variant. For instance, a TDocVariant document may be stored in R.Value, and a dynamic array of integer values accessed via its index 1 shortcut and the TSQLRecord.DynArray() method.
The usual Retrieve / Delete / Update methods are available:
R := TSQLORM.Create;
try
  for i := 1 to COLL_COUNT do begin
    Check(Client.Retrieve(i,R));
    // here R instance contains all values of one document, excluding BLOBs
  end;
finally
R.Free;
end;
You can define a WHERE clause, as if the back-end were a regular SQL database:
R := TSQLORM.CreateAndFillPrepare(Client,'ID=?',[i]);
try
...
9.2.4. ODM complex queries
To perform a query and retrieve the content of several documents, you can use regular CreateAndFillPrepare or FillPrepare methods:
R := TSQLORM.CreateAndFillPrepare(Client,WHERE_CLAUSE,[WHERE_PARAMETERS]);
try
n := 0;
  while R.FillOne do begin
    // here R instance contains all values of one document, excluding BLOBs
inc(n);
end;
assert(n=COLL_COUNT);
finally
R.Free;
end;
A WHERE clause can also be defined for CreateAndFillPrepare or FillPrepare methods. This WHERE clause could contain several expressions, joined with AND / OR. Each of those expressions could use:
Or even any ...DynArrayContains() specific function.
The mORMot ODM will convert this SQL-like statement into the optimized MongoDB query expression, using e.g. a regular expression for the LIKE operator.
The LIMIT, OFFSET and ORDER BY clauses will also be handled as expected. A special care should be taken for an ORDER BY on textual values: by design, MongoDB will always sort text with case-sensitivity, which is not what we expect: so our ODM will sort such content on client side, after having been retrieved from the MongoDB server. For numerical fields, MongoDB sorting features will be processed on the server side.
The COUNT(*) function will also be converted into the proper MongoDB API call, so that such operations will be as costless as possible. DISTINCT() MAX() MIN() SUM() AVG() functions and the GROUP BY clause will also be converted into optimized MongoDB aggregation pipelines, on the fly. You could even set aliases for the columns (e.g. max(RowID) as first) and perform simple addition/subtraction of an integer value.
Here are some typical WHERE clauses, and the corresponding MongoDB query document as generated by the ODM:
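A few illustrative conversions (the generated documents below use standard MongoDB operators, and are shown as examples rather than an exhaustive reference):
'Name=?' with ['Name 43']      ->  {Name:"Name 43"}
'Age<?' with [51]              ->  {Age:{$lt:51}}
'Age in (?,?,?)' with [1,2,3]  ->  {Age:{$in:[1,2,3]}}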
Note that parentheses and mixed AND / OR expressions are not handled yet. You can always execute any complex NoSQL query (e.g. using aggregation functions or the Map/Reduce pattern) by using the TMongoCollection methods directly.
But for most business code, mORMot allows you to share the exact same code between your regular SQL databases and NoSQL engines. You do not need to learn the MongoDB query syntax: the ODM will compute the right expression for you, depending on the database engine it runs on.
9.2.5. BATCH mode
In addition to individual CRUD operations, our MongoDB integration is able to use BATCH mode for adding or deleting documents.
You can write the exact same code as with any SQL back-end:
Client.BatchStart(TSQLORM);
for i := 5 to COLL_COUNT do
  if i mod 5=0 then
    assert(Client.BatchDelete(i)>=0);
assert(Client.BatchSend(IDs)=HTTP_SUCCESS);
The speed benefit may be huge compared to individual Add/Delete operations, even on a local MongoDB server. We will see some benchmark numbers now.
9.2.6. ORM/ODM performance
You can take a look at Data access benchmark to compare MongoDB as back-end for our ORM classes.
Compared to external SQL engines, it features very high speed, low CPU use, and almost no difference in usage. We interfaced the BatchAdd() and BatchDelete() methods to benefit from the MongoDB BULK process, and avoided most memory allocations during the process.
Here are some numbers, extracted from the MongoDBTests.dpr sample, which reflects the performance of our ORM/ODM, depending on the Write Concern mode used:
2. ORM
2.1. ORM with acknowledge:
- Connect to local server: 6 assertions passed 18.65ms
- Insert: 5,002 assertions passed 521.25ms
5000 rows inserted in 520.65ms i.e. 9603/s, aver. 104us, 2.9 MB/s
- Insert in batch mode: 5,004 assertions passed 65.37ms
5000 rows inserted in 65.07ms i.e. 76836/s, aver. 13us, 8.4 MB/s
- Retrieve: 45,001 assertions passed 640.95ms
5000 rows retrieved in 640.75ms i.e. 7803/s, aver. 128us, 2.1 MB/s
- Retrieve all: 40,001 assertions passed 20.79ms
5000 rows retrieved in 20.33ms i.e. 245941/s, aver. 4us, 27.1 MB/s
- Retrieve one with where clause: 45,410 assertions passed 673.01ms
5000 rows retrieved in 667.17ms i.e. 7494/s, aver. 133us, 2.0 MB/s
- Update: 40,002 assertions passed 681.31ms
5000 rows updated in 660.85ms i.e. 7565/s, aver. 132us, 2.4 MB/s
- Blobs: 125,003 assertions passed 2.16s
5000 rows updated in 525.97ms i.e. 9506/s, aver. 105us, 2.4 MB/s
- Delete: 38,003 assertions passed 175.86ms
1000 rows deleted in 91.37ms i.e. 10944/s, aver. 91us, 2.3 MB/s
- Delete in batch mode: 33,003 assertions passed 34.71ms
1000 rows deleted in 14.90ms i.e. 67078/s, aver. 14us, 597 KB/s
Total failed: 0 / 376,435 - ORM with acknowledge PASSED 5.00s
2.2. ORM without acknowledge:
- Connect to local server: 6 assertions passed 16.83ms
- Insert: 5,002 assertions passed 179.79ms
5000 rows inserted in 179.15ms i.e. 27908/s, aver. 35us, 3.9 MB/s
- Insert in batch mode: 5,004 assertions passed 66.30ms
5000 rows inserted in 31.46ms i.e. 158891/s, aver. 6us, 17.5 MB/s
- Retrieve: 45,001 assertions passed 642.05ms
5000 rows retrieved in 641.85ms i.e. 7789/s, aver. 128us, 2.1 MB/s
- Retrieve all: 40,001 assertions passed 20.68ms
5000 rows retrieved in 20.26ms i.e. 246718/s, aver. 4us, 27.2 MB/s
- Retrieve one with where clause: 45,410 assertions passed 680.99ms
5000 rows retrieved in 675.24ms i.e. 7404/s, aver. 135us, 2.0 MB/s
- Update: 40,002 assertions passed 231.75ms
5000 rows updated in 193.74ms i.e. 25807/s, aver. 38us, 3.6 MB/s
- Blobs: 125,003 assertions passed 1.44s
5000 rows updated in 150.58ms i.e. 33202/s, aver. 30us, 2.6 MB/s
- Delete: 38,003 assertions passed 103.57ms
1000 rows deleted in 19.73ms i.e. 50668/s, aver. 19us, 2.4 MB/s
- Delete in batch mode: 33,003 assertions passed 47.50ms
1000 rows deleted in 364us i.e. 2747252/s, aver. 0us, 23.4 MB/s
Total failed: 0 / 376,435 - ORM without acknowledge PASSED 3.44s
As for direct MongoDB access, the wcUnacknowledged is not to be used on production, but may be very useful in some particular scenarios. As expected, the reading process is not impacted by the Write Concern mode set.
10. JSON RESTful Client-Server
Adopt a mORMot
Before describing the Client-Server design of this framework, we may have to detail some standards it is based on:
JSON as its internal data storage and transmission format;
REST as its Client-Server architecture.
10.1. JSON
10.1.1. Why use JSON?
As we just stated, the JSON format is used internally in this framework. By definition, the JavaScript Object Notation (JSON) is a standard, open and lightweight computer data interchange format.
JSON type | Description
Number | Double precision floating-point format in JavaScript, generally depends on implementation. There is no specific integer type
String | Double-quoted Unicode, with backslash escaping
Boolean | true or false
Array | An ordered sequence of values, comma-separated and enclosed in square brackets; the values do not need to be of the same type
Object | An unordered collection of key:value pairs with the ':' character separating the key and the value, comma-separated and enclosed in curly braces; the keys must be strings and should be distinct from each other
Non-significant white space may be added freely around the "structural characters" (i.e. brackets "{ } [ ]", colons ":" and commas ",").
The following example shows the JSON representation of an object that describes a person. The object has string fields for first name and last name, a number field for age, an object representing the person's address and an array of phone number objects.
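An illustrative JSON document of this kind may look like:
{
  "firstName": "John",
  "lastName": "Smith",
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "postalCode": "10021"
  },
  "phoneNumber": [
    { "type": "home", "number": "212 555-1234" },
    { "type": "fax", "number": "646 555-4567" }
  ]
}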
Usage of this layout, instead of others like XML or any proprietary format, results in several particularities:
Like XML, it is a text-based, human-readable format for representing simple data structures and associative arrays (called objects);
It's easier to read (for both human beings and machines), quicker to implement, and much smaller in size than XML for most use;
It's a very efficient format for data caching;
Its layout allows it to be rewritten in place into individual zero-terminated UTF-8 strings, with almost no wasted space: this feature is used for very fast JSON to text conversion of the table results, with no memory allocation nor data copy;
It's natively supported by the JavaScript language, making it a perfect serialization format in any AJAX (i.e. Web 2.0) or HTML5 Mobile application;
The JSON format is simple, and specified in a short and clean RFC document;
The default text encoding for both JSON and SQLite3 is UTF-8, which allows the full Unicode char-set to be stored and communicated;
It is the default data format used by ASP.NET AJAX services created in Windows Communication Foundation (WCF) since .NET framework 3.5; so it's Microsoft officially "ready";
For binary BLOB transmission, we simply encode the binary data as Base64; please note that, by default, BLOB fields are not transmitted over REST with other fields in JSON objects, see below (the only exception are dynamic array fields, which are transmitted within the other fields).
REST JSON serialization will indeed be used in our main ORM to process any TSQLRecord published properties, and in the interface-based SOA architecture of the framework, for content transmission.
In the framework, the whole http://json.org standard is implemented, with some exceptions/extensions:
#0 characters will indicate the end of input, as with almost all JSON libraries - so if your text input contains a #0 char, please handle it as binary (note that other control chars are escaped as expected);
You may use an "extended syntax" (used e.g. by MongoDB) by unquoting ASCII-only property names;
Floating point numbers are sometimes limited to currency (i.e. 4 decimals), to ensure serialization/unserialization won't lose precision; but in such cases, it can be extended to the double precision via a set of options;
There is no 53-bit limitation for integers, as with JavaScript: the framework handles 64-bit integer values - when using a JavaScript back-end, you may have to transmit huge values as text.
In practice, JSON has been found to be very easy to work with and stable. A binary format is not used for transmission yet, but is available at other levels of the framework, e.g. as a possible file format for the in-memory TObjectList database engine (with our SynLZ compression - see Virtual Tables magic).
10.1.2. Values serialization
Standard Delphi value types are serialized directly within the JSON content, in their textual representation. For instance, integer or Int64 are stored as numbers, and double values are stored as their corresponding floating-point representation.
All string content is serialized as standard JSON text field, i.e. nested with double quotes ("). Since JSON uses UTF-8 encoding, it is one of the reasons why we introduced the RawUTF8 type, and use it everywhere in our framework.
10.1.3. Record serialization
In Delphi, the record has some nice advantages:
records are value objects, i.e. accessed by value, not by reference - this can be very convenient, e.g. when defining Domain-Driven Design value objects;
a record can contain any other record or dynamic array, so records are very convenient to work with (no need to define sub-classes or lists);
record variables can be allocated on the stack, so won't solicit the global heap;
record instances are automatically freed by the compiler when they come out of scope, so you won't need to write any try..finally Free; end block.
Serialization of record values is therefore a must-have for a framework like mORMot. In practice, the record types should be defined as packed record, so that low-level access will be easier to manage by the serializers.
10.1.3.1. Automatic serialization via Enhanced RTTI
Since Delphi 2010, the compiler generates additional RTTI at compilation, so that all record fields are described, and available at runtime. By the way, this enhanced RTTI is one of the reasons why executables grew so much in newer versions of the compiler.
Our SynCommons.pas unit is able to use this enhanced information, and let any record be serialized via RecordLoad() and RecordSave() functions, and all internal JSON marshalling process.
In short, you have nothing to do. Just use your record as parameters, and, with Delphi 2010 and up, they will be serialized as valid JSON objects. The only restriction is that the records should be defined as packed record.
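For instance (a minimal sketch - TMyRecord is a hypothetical record introduced only for illustration):
type
  TMyRecord = packed record
    Name: RawUTF8;
    Age: integer;
  end;
var rec: TMyRecord;
    json: RawUTF8;
begin
  rec.Name := 'John';
  rec.Age := 42;
  json := RecordSaveJSON(rec,TypeInfo(TMyRecord));
  // with the enhanced RTTI, json now contains '{"Name":"John","Age":42}'
  RecordLoadJSON(rec,@json[1],TypeInfo(TMyRecord)); // and back again
end;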
10.1.3.2. Serialization for older Delphi versions
Sadly, the information needed to serialize a record is available only since Delphi 2010.
If your application is developed with any older revision (e.g. Delphi 7, Delphi 2007 or Delphi 2009), you won't be able to automatically serialize records as plain JSON objects directly.
You have several paths available:
By default, the record will be serialized as binary, and encoded as Base64 text;
Or you can define method callbacks which will write or read the data as you expect;
Or you can define the record layout as plain text.
Note that any custom serialization (either via callbacks, or via text definition), will override any previously registered method, even the mechanism using the enhanced RTTI. You can change the default serialization to easily meet your requirements. For instance, this is what SynCommons.pas does for any TGUID content, which is serialized as the standard JSON text layout (e.g. "C9A646D3-9C61-4CB7-BFCD-EE2522C8F633"), and not following the TGUID record layout as defined in the RTTI, i.e. {"D1":12345678,"D2":23023,"D3":9323,"D4":"0123456789ABCDEF"} - which is far from convenient.
10.1.3.2.1. Default Binary/Base64 serialization
On any version of the compiler prior to Delphi 2010, any record value will be serialized by default with a proprietary binary (and optimized) layout - i.e. via RecordLoad and RecordSave functions - then encoded as Base64, to be stored as plain text within the JSON stream.
A special UTF-8 prefix (which does not match any existing Unicode glyph) is added at the beginning of the resulting JSON string to identify this content as a BLOB, as such:
{ "MyRecord": "ï¿°w6nDoMOnYQ==" }
You will find in SynCommons.pas unit both BinToBase64 and Base64ToBin functions, very optimized for speed. Base64 encoding was chosen since it is standard, much more efficient than hexadecimal, and still JSON compatible without the need to escape its content.
When working with most part of the framework, you do not have anything to do: any record will by default follow this Base64 serialization, so you will be able e.g. to publish or consume interface-based services with records.
10.1.3.2.2. Custom serialization
Base64 encoding is pretty convenient for a computer (it is a compact and efficient format), but its interoperability is very limited. Our format is proprietary, and will use the internal Delphi serialization scheme: it means that it won't be readable nor writable outside the scope of your own mORMot applications. In a RESTful/SOA world, this sounds more like a limitation than a feature.
Custom record JSON serialization can therefore be defined, as with any class - see below. It will allow writing and parsing record variables as regular JSON objects, ready to be consumed by any client or server. Internally, some callbacks will be used to perform the serialization.
In fact, there are two entry points to specify a custom JSON serialization for record:
When setting a custom dynamic array JSON serializer - see below - the associated record will also use the same Reader and Writer callbacks;
By setting explicitly serialization callbacks for the TypeInfo() of the record, with the very same TTextWriter.RegisterCustomJSONSerializer method used for dynamic arrays.
Then the Reader and Writer callbacks can be defined by two means:
By hand, i.e. coding the methods with manual conversion to JSON text or parsing;
Via some text-based type definition, which will follow the record layout, but will do all the marshalling (including memory allocation) on its own.
10.1.3.2.3. Defining callbacks
For instance, if you want to serialize the following record:
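A sketch, assuming a record layout consistent with the test code used later in this section, and a hypothetical test class owning the callbacks:
type
  TSQLRestCacheEntryValue = packed record
    ID: Int64;
    Timestamp: cardinal;
    JSON: RawUTF8;
  end;
The corresponding writer callback could be coded as:
class procedure TTestServiceOrientedArchitecture.CustomWriter(
  const aWriter: TTextWriter; const aValue);
var V: TSQLRestCacheEntryValue absolute aValue;
begin
  aWriter.AddJSONEscape(['ID',V.ID,
    'Timestamp',Int64(V.Timestamp),'JSON',V.JSON]);
end;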
In the above code, the cardinal field named Timestamp is type-casted to an Int64: in fact, as stated by the documentation of the AddJSONEscape method, an array of const will handle by default any cardinal as an integer value (this is a limitation of the Delphi compiler). By forcing the type to be an Int64, the expected cardinal value will be transmitted, and not a wrongly negative value for numbers > $7fffffff.
On the other side, the corresponding reader callback will be like:
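A sketch of this reader (the TValuePUTF8Char helper methods and the GetInt64/GetCardinal functions are assumptions - check SynCommons.pas for the exact helpers available in your revision):
class function TTestServiceOrientedArchitecture.CustomReader(P: PUTF8Char;
  var aValue; out aValid: Boolean): PUTF8Char;
var V: TSQLRestCacheEntryValue absolute aValue;
    Values: array[0..2] of TValuePUTF8Char;
begin
  result := JSONDecode(P,['ID','Timestamp','JSON'],@Values);
  aValid := (result<>nil);
  if aValid then begin
    V.ID := GetInt64(Values[0].Value);
    V.Timestamp := GetCardinal(Values[1].Value);
    V.JSON := Values[2].ToUTF8;
  end;
end;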
Here JSONDecode() is used for fast deserialization of a JSON object.
10.1.3.2.4. Text-based definition
Writing those callbacks by hand could be error-prone, especially for the Reader event.
You can use the TTextWriter.RegisterCustomJSONSerializerFromText method to define the record layout in a convenient text-based format. Once more, those types need to be defined as packed record, so that the text layout definition will not depend on compiler-specific field alignment.
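For the record used above, both of the following registrations would presumably be equivalent (one with, one without the optional punctuation):
TTextWriter.RegisterCustomJSONSerializerFromText(
  TypeInfo(TSQLRestCacheEntryValue),
  'ID: Int64; Timestamp: cardinal; JSON: RawUTF8');

TTextWriter.RegisterCustomJSONSerializerFromText(
  TypeInfo(TSQLRestCacheEntryValue),
  'ID Int64 Timestamp cardinal JSON RawUTF8');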
Both declarations will do the same definition. Note that the supplied text should match exactly the original record type definition: do not swap or forget any property!
By convention, we use two underscore characters (__) before the record type name, to easily identify the layout definition. It may indeed be convenient to write it as a constant, close to the record type definition itself, and not in-lined at RegisterCustomJSONSerializerFromText() call level.
You can also unserialize some existing JSON content:
U := '{"ID":210,"Timestamp":2200,"JSON":"test2"}';
RecordLoadJSON(Cache,@U[1],TypeInfo(TSQLRestCacheEntryValue));
Check(Cache.ID=210);
Check(Cache.Timestamp=2200);
Check(Cache.JSON='test2');
Note that this text-based definition is very powerful, and is able to handle any level of nested record or dynamic arrays.
By default, it will write the JSON content in a compact form, and will expect only existing fields to be available in the incoming JSON. You can specify some options at registration, to ignore all non defined fields. It can be very useful when you want to consume some remote service, and are interested only in a few fields.
For instance, we may define a client access to a RESTful service like api.github.com:
type
  TTestCustomJSONGitHub = packed record
name: RawUTF8;
id: cardinal;
description: RawUTF8;
fork: boolean;
owner: record
login: RawUTF8;
id: cardinal;
end;
end;
  TTestCustomJSONGitHubs = array of TTestCustomJSONGitHub;
const
__TTestCustomJSONGitHub = 'name RawUTF8 id cardinal description RawUTF8 '+
'fork boolean owner{login RawUTF8 id cardinal}';
Note the { } format to define a nested record, as a shorter alternative to a nested record .. end syntax.
It is also mandatory that you declare the record as packed. Otherwise, you may have unexpected access violation issues, since alignment may vary, depending on local settings and compiler revision.
Now we can register the record layout, and provide some additional options:
soReadIgnoreUnknownFields to ignore any non defined field in the incoming JSON;
soWriteHumanReadable to let the output JSON be more readable.
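Putting it together, the registration may look like this sketch (assuming the returned parser instance exposes an Options property):
TTextWriter.RegisterCustomJSONSerializerFromText(
  TypeInfo(TTestCustomJSONGitHub),__TTestCustomJSONGitHub).Options :=
  [soReadIgnoreUnknownFields,soWriteHumanReadable];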
Then the JSON can be parsed then emitted as such:
var git: TTestCustomJSONGitHubs;
...
U := zendframeworkJson;
Check(DynArrayLoadJSON(git,@U[1],TypeInfo(TTestCustomJSONGitHubs))<>nil);
U := DynArraySaveJSON(git,TypeInfo(TTestCustomJSONGitHubs));
You can see that the record serialization is auto-magically available at dynamic array level, which is pretty convenient in our case, since the api.github.com RESTful service returns a JSON array.
It will convert 160 KB of very verbose JSON information:
During the parsing process, all unneeded JSON members will just be ignored. The parser will jump over that data, without doing any temporary memory allocation. This is a huge difference with other existing Delphi JSON parsers, which first create a tree of all JSON values in memory, then let you browse all the branches on request.
Note also that the fields have been ordered following the TTestCustomJSONGitHub record definition, which may not match the original JSON layout (here name/id fields order is inverted, and owner is set at the end of each item, for instance).
With mORMot, you can then access directly the content from your Delphi code as such:
if git[0].id=8079771 then begin
Check(git[0].name='Component_ZendAuthentication');
Check(git[0].description='Authentication component from Zend Framework 2');
Check(git[0].fork=true);
Check(git[0].owner.login='zendframework');
Check(git[0].owner.id=296074);
end;
Note that we do not need to use intermediate objects (e.g. via some obfuscated expressions like gitarray.Value[0].Value['owner'].Value['login']). Your code will be much more readable, will complain at compilation if you misspell any field name, and will be easy to debug within the IDE (since the record layout can be easily inspected).
The serialization is able to handle any kind of nested record or dynamic arrays, including dynamic arrays of simple types (e.g. array of integer or array of RawUTF8), or dynamic arrays of record:
const
__TTestCustomJSONRecord = 'A,B,C integer D RawUTF8 E{E1,E2 double} F TDateTime';
__TTestCustomJSONArray = 'A,B,C integer D RawByteString E[E1 double E2 string] F TDateTime';
__TTestCustomJSONArraySimple = 'A,B Int64 C array of synunicode D RawUTF8';
The following types are handled by this feature:
Delphi type | Remarks
boolean | Serialized as JSON boolean
byte word integer cardinal Int64 single double currency TUnixTime | Serialized as JSON number
For other types (like enumerations or sets), you can simply use the unsigned integer types corresponding to the binary value, e.g. byte word cardinal Int64 (depending on the sizeof() of the initial value).
For instance, void TTestCustomJSONRecord may be serialized as:
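Presumably something like the following, derived from the __TTestCustomJSONRecord definition above:
{"A":0,"B":0,"C":0,"D":"","E":{"E1":0,"E2":0},"F":""}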
Or void TTestCustomJSONArray may be serialized as:
{"A":0,"B":0,"C":0,"D":null,"E":[],"F":""}
Or void TTestCustomJSONArraySimple may be serialized as:
{"A":0,"B":0,"C":[],"D":""}
You can refer to the supplied regression tests (in TTestLowLevelTypes.EncodeDecodeJSON) for some more examples of custom JSON serialization.
10.1.4. Dynamic array serialization
10.1.4.1. Standard JSON arrays
Note that dynamic arrays are handled in two separated contexts:
Within the ORM part of the framework, they are stored as BLOB and always transmitted after Base64 encoding - see TSQLRecord fields definition;
Within the scope of interface-based services, dynamic array values and parameters are using the advanced JSON serialization made available in the TDynArray wrapper, i.e. could be either a true JSON array, or, by default prior to Delphi 2010, use generic binary and Base64 encoding.
In fact, this TDynArray wrapper - see TDynArray dynamic array wrapper - recognizes most common kind of dynamic arrays, like array of byte, word, integer, cardinal, Int64, double, currency, RawUTF8, SynUnicode, WinAnsiString, string. They will be serialized as a valid JSON array, i.e. a list of valid JSON elements of the matching type (number, floating-point value or string). If you have any ideas of standard dynamic arrays which should be handled, feel free to post your proposal in the forum!
Since Delphi 2010, the framework will use the enhanced RTTI to create a JSON array corresponding to the data layout of each dynamic array item, just as for Record serialization.
For versions of the compiler up to Delphi 2009, unknown dynamic arrays (like any array of packed record) will be serialized by default as binary, then Base64 encoded. This method will always work, but won't be easy to deal with from an AJAX client.
Of course, your applications can supply a custom JSON serialization for any other dynamic array, via the TTextWriter.RegisterCustomJSONSerializer() class method. Two callbacks are to be defined in association with dynamic array type information, in order to handle proper serialization and un-serialization of the JSON array. As an alternative, you can call the RegisterCustomJSONSerializerFromText method to define the record layout in a convenient text-based format - see above.
In fact, if you register a dynamic array custom serializer, it will also be used for the associated internal record.
10.1.4.2. Customized serialization
As we already stated, it may be handy to change the default serialization.
For instance, we would like to serialize a dynamic array of the following record:
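A sketch of such a record, with a field layout consistent with the callbacks shown below:
type
  TFV = packed record
    Major, Minor, Release, Build: integer;
    Main, Detailed: string;
  end;
  TFVs = array of TFV;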
With the default serialization, such a dynamic array will be serialized either:
As a Base64 encoded binary buffer, before Delphi 2010 - this won't be easy to understand from an AJAX client, for instance;
As a JSON array of JSON object, with all property names listed within each object, since Delphi 2010 and its enhanced RTTI.
This default serialization can be overridden, by defining callbacks. It could be handy, e.g. if you do not like the fact that all field names are written in the data, which may be a waste of space:
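For instance, the default field-name layout would look something like (values are illustrative):
[{"Major":1,"Minor":2001,"Release":3001,"Build":4001,"Main":"1","Detailed":"1001"},
 {"Major":2,"Minor":2002,"Release":3002,"Build":4002,"Main":"2","Detailed":"1002"},...]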
In order to add a custom serialization for this kind of record, we need to implement the two needed callbacks. Our expected format will be a JSON array of all fields, i.e.:
[1,2001,3001,4001,"1","1001"]
This layout is more than two times shorter than the default JSON object format.
We may have used another layout, e.g. using JSONEncode() function and a JSON object layout, or any other valid JSON content.
Here comes the writer:
class procedure TCollTstDynArray.FVWriter(const aWriter: TTextWriter; const aValue);
var V: TFV absolute aValue;
begin
aWriter.Add('[%,%,%,%,"%","%"]',
[V.Major,V.Minor,V.Release,V.Build,V.Main,V.Detailed],twJSONEscape);
end;
This event will write one entry of the dynamic array, without the last ',' (which will be appended by TTextWriter.AddDynArrayJSON). In this method, twJSONEscape is used to escape the supplied string content as a valid JSON string (with double quotes and proper UTF-8 encoding).
Of course, the Writer is easier to code than the Reader itself:
class function TCollTstDynArray.FVReader(P: PUTF8Char; var aValue;
out aValid: Boolean): PUTF8Char;
var V: TFV absolute aValue;
begin
  // '[1,2001,3001,4001,"1","1001"],[2,2002,3002,4002,"2","1002"],...'
aValid := false;
result := nil;
if (P=nil) or (P^<>'[') then
exit;
inc(P);
V.Major := GetNextItemCardinal(P);
V.Minor := GetNextItemCardinal(P);
V.Release := GetNextItemCardinal(P);
V.Build := GetNextItemCardinal(P);
V.Main := UTF8ToString(GetJSONField(P,P));
V.Detailed := UTF8ToString(GetJSONField(P,P));
  if P=nil then
exit;
aValid := true;
  result := P; // ',' or ']' for last item of array
end;
The reader method shall return a pointer to the next separator of the JSON input buffer just after this item (either ',' or ']').
Then, from the user code point of view, this dynamic array handling won't change: once registered, the JSON serializers are used everywhere in the framework, as soon as this type is globally registered.
Here is a Writer method using a JSON object layout, which may be used for Delphi up to 2009, to obtain a serialization similar to the one generated via the enhanced RTTI.
class procedure TCollTstDynArray.FVWriter2(const aWriter: TTextWriter; const aValue);
var V: TFV absolute aValue;
begin
aWriter.AddJSONEscape(['Major',V.Major,'Minor',V.Minor,'Release',V.Release,
'Build',V.Build,'Main',V.Main,'Detailed',V.Detailed]);
end;
We may also use similar callbacks, e.g. if we want the property names to be changed, or ignored depending on some default values.
Then the corresponding Reader callback could be written as:
class function TCollTstDynArray.FVReader2(P: PUTF8Char; var aValue;
out aValid: Boolean): PUTF8Char;
var V: TFV absolute aValue;
    Values: array[0..5] of TValuePUTF8Char;
begin
aValid := false;
result := JSONDecode(P,['Major','Minor','Release','Build','Main','Detailed'],@Values);
  if result=nil then
exit; // result^ = ',' or ']' for last item of array
V.Major := Values[0].ToInteger;
V.Minor := Values[1].ToInteger;
V.Release := Values[2].ToInteger;
V.Build := Values[3].ToInteger;
V.Main := Values[4].ToString;
V.Detailed := Values[5].ToString;
aValid := true;
end;
Most of the JSON decoding process is performed within the JSONDecode() function, which will let Values[].Value/ValueLen couples point to null-terminated un-escaped content within the P^ buffer. In fact, unserialization will do no memory allocation, and will therefore be very fast.
If you want to go back to the default binary + Base64 encoding serialization, you may run the registering method as such:
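Presumably by supplying nil callbacks for the registered dynamic array type, e.g.:
TTextWriter.RegisterCustomJSONSerializer(TypeInfo(TFVs),nil,nil);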
You can now define your own custom JSON serializers, starting from the above code as reference, or via the RegisterCustomJSONSerializerFromText() method text-based definition.
Note that if the record corresponding to its item dynamic array has some associated RTTI (i.e. if it contains some reference-counted types, like any string), it will be serialized as JSON during the mORMot service process, just as stated with Record serialization.
Classes with published properties, i.e. every class inheriting from TPersistent or our ORM-dedicated TSQLRecord class will be serialized as a true JSON object, containing all their published properties values. See TSQLRecord fields definition for a corresponding table with the ORM database types and the JSON content.
List of Delphi strings, i.e. TStrings kind of classes will be serialized as a JSON array of strings. This is the reason why we also introduced a dedicated TRawUTF8List class, for direct UTF-8 content storage, via our dedicated RawUTF8 type, reducing the need of encoding conversion, therefore increasing process speed.
10.1.6. TObject serialization
In fact, any TObject can be serialized as JSON in the whole framework: not only for the ORM part (for published properties), but also for SOA (as parameters of interface-based service methods). All JSON serialization is centralized in ObjectToJSON() and JSONToObject() (aka TJSONSerializer.WriteObject) functions.
10.1.6.1. Custom class serialization
In some cases, it may be handy to have a custom serialization, for instance if you want to manage some third-party classes, or to adapt the serialization scheme to a particular purpose, at runtime.
You can add a customized serialization of any class, by calling the TJSONSerializer.RegisterCustomSerializer class method. Two callbacks are to be defined for a specific class type, and will be used to serialize or un-serialize the object instance. The callbacks are class methods (procedure() of object), and not plain functions (for some evolved objects, it may make sense to use a context during serialization).
In the current implementation of this feature, callbacks expect low-level implementation. That is, their implementation code shall follow function JSONToObject() patterns, i.e. calling low-level GetJSONField() function to decode the JSON content, and follow function TJSONSerializer.WriteObject() patterns, i.e. aSerializer.Add/AddInstanceName/AddJSONEscapeString to encode the class instance as JSON.
Note that the process is called outside the "{...}" JSON object layout, allowing any serialization scheme: even a class content can be serialized as a JSON string, JSON array or JSON number, on request.
For instance, we'd like to customize the serialization of this class (defined in SynCommons.pas):
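An abridged sketch of its declaration (the actual code in SynCommons.pas may differ):
type
  {$M+}
  TFileVersion = class
  protected
    fMajor, fMinor, fRelease, fBuild: integer;
    fMain: string;
    fBuildDateTime: TDateTime;
  published
    property Major: integer read fMajor;
    property Minor: integer read fMinor;
    property Release: integer read fRelease;
    property Build: integer read fBuild;
    property Main: string read fMain;
    property BuildDateTime: TDateTime read fBuildDateTime write fBuildDateTime;
  end;
  {$M-}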
By default, since it has been defined within {$M+} ... {$M-} directives, RTTI is available for the published properties (just as if it were inheriting from TPersistent). That is, the default JSON serialization will be for instance:
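i.e. a plain JSON object with the published property names (values are illustrative):
{"Major":7,"Minor":8,"Release":9,"Build":10,"Main":"7.8","BuildDateTime":"2016-06-14T12:00:00"}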
We will therefore define the Writer callback, as such:
class procedure TCollTstDynArray.FVClassWriter(const aSerializer: TJSONSerializer;
aValue: TObject; aOptions: TTextWriterWriteObjectOptions);
var V: TFileVersion absolute aValue;
begin
aSerializer.AddJSONEscape(['Major',V.Major,'Minor',V.Minor,'Release',V.Release,
'Build',V.Build,'Main',V.Main,'BuildDateTime',DateTimeToIso8601Text(V.BuildDateTime)]);
end;
Most of the JSON serialization work will be made within the AddJSONEscape method, expecting the JSON object description as an array of name/value pairs.
Then the associated Reader callback could be, for instance:
class function TCollTstDynArray.FVClassReader(const aValue: TObject; aFrom: PUTF8Char;
var aValid: Boolean; aOptions: TJSONToObjectOptions): PUTF8Char;
var V: TFileVersion absolute aValue;
    Values: array[0..5] of TValuePUTF8Char;
begin
result := JSONDecode(aFrom,['Major','Minor','Release','Build','Main','BuildDateTime'],@Values);
aValid := (result<>nil);
  if aValid then begin
V.Major := Values[0].ToInteger;
V.Minor := Values[1].ToInteger;
V.Release := Values[2].ToInteger;
V.Build := Values[3].ToInteger;
V.Main := Values[4].ToString;
V.BuildDateTime := Iso8601ToDateTimePUTF8Char(Values[5].Value,Values[5].ValueLen);
end;
end;
Here, the JSONDecode function will un-serialize the JSON object into an array of PUTF8Char values, without any memory allocation (in fact, Values[].Value will point to un-escaped and #0 terminated content within the aFrom memory buffer), so decoding is very fast.
Then, the registration step will be defined as such:
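A sketch of the registration, binding the two class-method callbacks defined above to the TFileVersion class:
TJSONSerializer.RegisterCustomSerializer(TFileVersion,
  TCollTstDynArray.FVClassReader,TCollTstDynArray.FVClassWriter);
Supplying nil callbacks would presumably un-register the custom serialization:
TJSONSerializer.RegisterCustomSerializer(TFileVersion,nil,nil);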
This will reset the JSON serialization of the specified class to the default serializer (i.e. writing of published properties).
The above code uses some low-level functions of the framework (i.e. AddJSONEscape and JSONDecode) to implement serialization as a JSON object, but you may use any other serialization scheme, on need. That is, you may serialize the whole class instance just as one JSON string or numerical value, or even a JSON array. It will depend of the implementation of the Reader and Writer registered callbacks.
You can even serialize TObjectList instances as a valid JSON array, with the ability to store each instance class name, so allowing the storage of non-uniform lists of objects. Calling TJSONSerializer.RegisterClassForJSON() is just needed to register each TObject class in its internal tables, and be able to create instances from a class name serialized in each JSON object.
In fact, if ObjectToJSON() or TJSONWriter.WriteObject() have their woStoreClassName option defined, a new "ClassName": field will be written as first field of the serialized JSON object.
// register the type (but the Classes.RegisterClass list is also checked)
TJSONSerializer.RegisterClassForJSON([TComplexNumber]);
// create an instance by reading the textual class name field
J := '{"ClassName":"TComplexNumber", "Real": 10.3, "Imaginary": 7.92 }';
P := @J[1]; // make local copy of constant
Comp := TComplexNumber(JSONToNewObject(P,Valid));
// here Comp is a valid unserialized object :)
Check(Valid);
Check(Comp.ClassType=TComplexNumber);
CheckSame(Comp.Real,10.3);
CheckSame(Comp.Imaginary,7.92);
// do not forget to free the memory (Comp can be nil if JSON was not valid)
Comp.Free;
Internal TObjectList process will therefore rely on a similar process, creating the proper class instances on the fly. You can even have several classes appearing in one TObjectList: the only prerequisite is that all class types shall have been previously registered on both sides, by a call to TJSONSerializer.RegisterClassForJSON().
10.2. REST
10.2.1. What is REST?
Representational state transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. As such, it is not just a method for building "web services". The terms "representational state transfer" and "REST" were introduced in 2000 in the doctoral dissertation of Roy Fielding, one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification, on which the whole Internet relies.
There are 5 basic fundamentals of the web which are leveraged to create REST services:
Everything is a Resource;
Every Resource is Identified by a Unique Identifier;
Use Simple and Uniform Interfaces;
Communication is Done by Representation;
Every Request is Stateless.
10.2.1.1. Resource-based
The Internet is all about getting data. This data can be in the form of a web page, image, video, file, etc. It can also be a dynamic output, like get customers who are newly subscribed. The first important point in REST is to start thinking in terms of resources rather than physical files.
10.2.1.2. Unique Identifier
In REST, we add one more constraint to the current URI: in fact, every URI should uniquely represent every item of the data collection.
For instance, you can see the below unique URI format for customer and orders fetched:
Customer data | URI
Get Customer details with name "dupont" | http://www.mysite.com/Customer/dupont
Get Customer details with name "smith" | http://www.mysite.com/Customer/smith
Get orders placed by customer "dupont" | http://www.mysite.com/Customer/dupont/Orders
Get orders placed by customer "smith" | http://www.mysite.com/Customer/smith/Orders
Here, "dupont" and "smith" are used as unique identifiers to specify a customer. In practice, a name is far from unique, therefor most systems use an unique ID (like an integer, a hexadecimal number or a GUID).
10.2.1.3. Interfaces
To access those identified resources, basic CRUD activity is identified by a set of HTTP verbs:
HTTP method | Action
GET | List the members of the collection (one or several)
PUT | Update a member of the collection
POST | Create a new entry in the collection
DELETE | Delete a member of the collection
Then, at URI level, you can define the type of collection, e.g. http://www.mysite.com/Customer to identify the customers, or http://www.mysite.com/Customer/1234/Orders to access the orders of a given customer.
This combination of HTTP method and URI replaces a list of English-based methods, like GetCustomer / InsertCustomer / UpdateOrder / RemoveOrder.
10.2.1.4. By Representation
What you are sending over the wire is in fact a representation of the actual resource data.
The main representation schemes are XML and JSON.
For instance, here is how a customer data is retrieved from a GET method:
Below is a simple JSON snippet for creating a new customer record with name and address (since we create a new record, here we named him "Dupond" - with an ending D - not "Dupont"):
As a result to this data transmitted with a POST command, the RESTful server will return the just-created ID.
See JSON for the reasons why in mORMot, we prefer to use JSON format.
10.2.1.5. Stateless
Every request should be an independent request so that we can scale up using load balancing techniques.
An independent request means that the request itself carries all the state it needs, so that the server can process it without relying on any previous request.
The Synopse mORMot Framework was designed in accordance with Fielding's REST architectural style, even when not using HTTP and not interacting with the World Wide Web. Systems which follow REST principles are often referred to as "RESTful". Optionally, the Framework is able to serve standard HTTP/1.1 pages over the Internet (by using the mORMotHttpClient / mORMotHttpServer units and the TSQLHttpServer and TSQLHttpClient classes), in an embedded low resource and fast HTTP server.
The standard RESTful methods are implemented, i.e. GET/PUT/POST/DELETE.
The following methods were added to the standard REST definition, for locking individual records and for handling database transactions (which speed up database process):
LOCK to lock a member of the collection;
UNLOCK to unlock a member of the collection;
BEGIN to initiate a transaction;
END to commit a transaction;
ABORT to rollback a transaction.
The GET method has an optional pagination feature, compatible with the YUI DataSource Request Syntax for data pagination - see TSQLRestServer.URI method and http://developer.yahoo.com/yui/datatable/#data . Of course, this breaks the "Every Resource is Identified by a Unique Identifier" RESTful principle - but it is much more easy to work with, e.g. to implement paging or custom filtering.
From the Delphi code point of view, a RESTful Client-Server architecture is implemented by inheriting some common methods and properties from a main class.
TSQLRestClient classes hierarchy
This diagram states how the TSQLRest class implements a common ancestor for both Client and Server classes.
10.2.2.1. BLOB fields
BLOB fields are defined as TSQLRawBlob published properties in the class definitions - this type being an alias to the RawByteString type (defined in SynCommons.pas for Delphi up to 2007, since it appeared only with Delphi 2009). But their content is not included in standard RESTful methods of the framework, to spare network bandwidth.
The RESTful protocol allows BLOB to be retrieved (GET) or saved (PUT) via a specific URL, like:
ModelRoot/TableName/TableID/BlobFieldName
This is even better than the standard JSON encoding, which works well but converts BLOBs to/from hexadecimal values, therefore needing twice their normal size. By using such a dedicated URL, data can be transferred as full binary.
Some dedicated methods of the generic TSQLRest class handle BLOB fields: RetrieveBlob and UpdateBlob.
10.2.2.2. JSON representation
The "04 - HTTP Client-Server" sample application available in the framework source code tree can be used to show how the framework is AJAX-ready, and can be proudly compared to any other REST server (like CouchDB) also based on JSON.
First deactivate the authentication - see below - by changing the parameter from true to false in Unit2.pas:
DB := TSQLRestServerDB.Create(Model,ChangeFileExt(paramstr(0),'.db3'),
false);
and by commenting the following line in Project04Client.dpr:
Then you can use your browser to test the JSON content:
Start the Project04Server.exe program: the background HTTP server, together with its SQLite3 database engine;
Start any Project04Client.exe instances, and add/find any entry, to populate the database a little;
Close the Project04Client.exe programs, if you want;
Open your browser, and type into the address bar:
http://localhost:8080/root
You'll see an error message:
TSQLHttpServer Server Error 400
Type into the address bar:
http://localhost:8080/root/SampleRecord
You'll see the result of all SampleRecord IDs, encoded as a JSON list, e.g.
[{"ID":1},{"ID":2},{"ID":3},{"ID":4}]
Type into the address bar:
http://localhost:8080/root/SampleRecord/1
You'll see the content of the SampleRecord of ID=1, encoded as JSON, e.g.
{"ID":1,"Time":"2010-02-08T11:07:09","Name":"AB","Question":"To be or not to be"}
Type into the address bar any other REST command, and the database will reply to your request...
You have got a full HTTP/SQLite3 RESTful JSON server in less than 400 KB.
Note that Internet Explorer or old versions of Firefox do not recognize the application/json; charset=UTF-8 content type for internal viewing. This is a limitation of those browsers: the above requests will download the content as .json files, but it won't prevent AJAX requests from working as expected.
10.2.2.3. Stateless ORM
Our framework implements REST as a stateless protocol, just like the HTTP/1.1 protocol which it may use as its communication layer.
A stateless server is a server that treats each request as an independent transaction that is unrelated to any previous request.
At first, this may be a bit disappointing when coming from a classic Client-Server approach. In a stateless world, you are never sure that your Client data is up-to-date: the only place where the data is safe is the server. In the web world, this is not an issue. But if you are coming from a rich Client background, this may concern you: you may have the habit of writing synchronization code from the server to replicate all changes to all its clients. This is not necessary in a stateless architecture any more.
The main rule of this architecture is to ensure that the Server is the only reference, and that the Client is able to retrieve any pending update from the Server side. That is, always modify a record content on the server side, then refresh the client to retrieve the modified value. Do not modify the client side directly, but always pass through the Server. The UI components of the framework follow these principles. Client-side modification could be performed, but must be made in a separate autonomous table/database. This will avoid any synchronization problem in case of concurrent client modification.
10.3. REST and JSON
10.3.1. JSON format density
Most common RESTful JSON implementations use a verbose format for the JSON content: see for example http://bitworking.org/news/restful_json which proposes to put the whole URI of every resource in the JSON content;
The REST implementation of the framework will return most concise JSON content, containing an array of objects:
[{"ID":1},{"ID":2},{"ID":3},{"ID":4}]
Depending on a setting, mORMot servers may in fact return this alternative layout (the non expanded format - see below), which can be even shorter, since it does not replicate the field names:
{"fieldCount":1,"values":["ID",1,2,3,4,5,6,7]}
Both layouts preserve bandwidth and human readability: since you were able to send a GET request to the URI http://example.org/coll, you already know this URI and can simply prepend it to build any future request - so there is no need to repeat it inside the JSON content itself.
In all cases, the Synopse mORMot Framework always returns the JSON content just as a pure response of a SQL query, with an array and field names.
10.3.2. JSON (not) expanded layouts
Note that our JSON content has two layouts, which can be produced according to the TSQLRestServer.NoAJAXJSON property:
1. the "expanded" or standard/AJAX layout, which allows you to create pure JavaScript objects from the JSON content, because the field name / JavaScript object property name is supplied for every value:
2. the "not expanded" layout, which reflects exactly the layout of the SQL request: first line/row are the field names, then all next lines.row are the field content:
By default, the NoAJAXJSON property is set to true when the TSQLRestServer.ExportServerNamedPipe is called: if you use named pipes for communication, you probably won't use a JavaScript client since all browsers communicate via HTTP only!
But otherwise, NoAJAXJSON property is set to false. You could force its value to true and you will save some bandwidth if JavaScript is never executed: even the parsing of the JSON Content will be faster with Delphi if JSON content is not expanded.
In this "not expanded" layout, the following JSON content:
A global cache, at SQlite3 level, is used to enhance the framework scaling, featuring JSON storage for its result encoding.
In order to speed-up the server response time, especially in a concurrent client access, the internal database engine is not to be called on every request. In fact, a global cache has been introduced to store in memory the latest SQL SELECT statements results, directly in JSON.
The SQLite3 engine access is protected at SQL/JSON cache level, via DB.LockJSON() calls in most TSQLRestServerDB methods.
A TSynCache instance is instantiated within the TSQLDataBase internal global instance, with the following line:
constructor TSQLRestServerDB.Create(aModel: TSQLModel; aDB: TSQLDataBase;
  aHandleUserAuthentication: boolean);
begin
  fStatementCache.Init(aDB.DB);
  aDB.UseCache := true; // we better use caching in this JSON oriented use
  (...)
This will enable a global JSON cache at the SQL level. This cache will be reset on every INSERT, UPDATE or DELETE SQL statement, whatever the corresponding table is.
If you need to disable the JSON cache for a particular request, add the SQLDATABASE_NOCACHE text, i.e. the '/*nocache*/' comment, anywhere in the SQL statement, e.g. in the ORM WHERE clause. It will tell TSQLDataBase not to cache the returned JSON content. It may be useful e.g. if you pass a pointer as a PtrInt(aVariable) bound parameter, which may have the very same integer reference value, but a different content.
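For instance, here is a minimal sketch of such a request (TSQLBaby and DoSomethingWith are hypothetical names, used only for illustration):
aBaby := TSQLBaby.CreateAndFillPrepare(Client,
  '/*nocache*/ Name LIKE ?', ['A%']); // bypass the JSON statement cache for this SELECT
try
  while aBaby.FillOne do
    DoSomethingWith(aBaby); // process each row, freshly retrieved from the database
finally
  aBaby.Free;
end;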
In practice, this global cache was found to be efficient, even if its implementation is somewhat naive. It is in fact much more tuned than the HTTP-level caching mechanisms used in most client-server solutions (using e.g. a Squid proxy) - since our caching is at the SQL level, it is shared among all CRUD / RESTful queries, and is also independent from the authentication scheme, which pollutes the URI. Associated with the other levels of cache - see ORM Cache - the framework scaling was found to be very good.
11. Client-Server process
Adopt a mORMot
11.1. Client-Server cheat sheet
Before diving into the details, and presenting all the mORMot framework Client-Server abilities, let's step back and look at the big picture.
In practice, for your project, you will have several possibilities to create a Client-Server system. ORM, SOA and MVC can all be accessed remotely, and it may not be easy to find out which method is preferred to implement, in the context of a production system.
Method         | Best for               | Beware
SOA Interfaces | RPC REST               | RPC
SOA Methods    | Full REST/HTTP         | Verbose
MVC Web        | Web site + AJAX        | HTML-oriented
ORM REST       | Tests or internal use  | Security/design flaws
In a nutshell,
SOA Interfaces - see below - is the preferred way to build both public and private services: both client and server code will be defined from interface types, including sessions management, stubbing/mocking, documentation generation, and security features.
SOA Methods - see below - will open full access to REST/HTTP details of each request, so may be needed to conform to a more REST, less RPC implementation - but the client side will need to be written by hand, and the server side could be more verbose to implement.
MVC Web - see below - is the way to go if you expect to develop mostly dynamic web pages, and sometimes consume some JSON content from JavaScript if needed, by accessing its url/json sub path.
ORM REST - see below - exposes all data automatically, but should better not be used on production for public APIs for architecture and security reasons, since it is directly tied to the datastore. It could be exposed internally, or for debugging/testing.
Remember that any combination of the four previous framework features can be defined in the same TSQLRestServer instance, so you can just pick up what fits your needs best.
We will now present all those communication features, but you may focus on SOA Interfaces, and its associated samples, when implementing your project, and go back to other details of this exhaustive documentation, only if needed.
11.2. Protocols
The mORMot framework can be used either stand-alone, or in a Client-Server model, via several communication layers:
Fast in-process access (an executable file using a common library, for instance);
Windows Messages, only locally on the same computer, which are very fast for small content;
Named pipes, which can be used locally between a Server running as a Windows service and some Client instances;
HTTP/1.1 over TCP/IP, for remote access of any client over a network or the Internet.
Abilities will depend on the protocol used. For instance, HTTP may sound slower than the alternatives, but it is the best protocol for remote access of concurrent clients, even running locally. For instance, mORMot's http.sys based server is able to serve 50,000 concurrent connections without any problem, whereas you had better not attempt to connect more than a dozen clients via named pipes or messages...
Here is some general information about the available communication layers:
Note that you can have several protocols exposing the same TSQLRestServer instance. You may expose the same server over HTTP and over named pipes, at the same time, depending on your speed requirements.
11.3. TSQLRest classes
This architecture is implemented by a hierarchy of classes, implementing the RESTful pattern - see REST - for either stand-alone, client or server side, all inheriting from a TSQLRest common ancestor, as two main branches:
RESTful Client-Server classes
All ORM operations (aka CRUD process) are available from the abstract TSQLRest class definition, which is overridden to implement either a Server (via TSQLRestServer classes), or a Client (via TSQLRestClientURI classes) access to the data.
You should instantiate the classes corresponding to the needed transmission protocol, but you had better rely on abstraction, i.e. implement your whole code logic against the abstract TSQLRestClient / TSQLRestServer classes. It will then help switching from one protocol or configuration to another at runtime, depending on your customer's expectations.
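As a minimal sketch of this approach (ShowBabies and TSQLBaby are hypothetical names, used only for illustration), business code may be written against the abstract client class, so that the actual transport can be decided at startup:
procedure ShowBabies(aClient: TSQLRestClientURI);
var aBaby: TSQLBaby;
begin
  aBaby := TSQLBaby.CreateAndFillPrepare(aClient, ''); // retrieve all rows
  try
    while aBaby.FillOne do
      writeln(aBaby.Name); // works the same whatever the transmission protocol is
  finally
    aBaby.Free;
  end;
end;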
11.3.1. Server classes
The following classes are available to implement a Server instance:
RESTful Server classes
In practice, in order to implement the business logic, you should better create a new class, inheriting from one of the above TSQLRestServer classes. Having your own inherited class does make sense, especially for implementing your own method-based services - see below - or for overriding internal methods.
The TSQLRestServerDB class is the main kind of Server of the framework. It will host a SQLite3 engine, as its core Database layer.
If your purpose is not to have a full SQLite3 engine available, you may create your server from a TSQLRestServerFullMemory class instead of TSQLRestServerDB: this will implement a fast in-memory engine (using TSQLRestStorageInMemory instances), with basic CRUD features (for ORM), and persistence on disk as JSON or optimized binary files - this kind of server is enough to handle authentication, and host services in a stand-alone way.
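A minimal sketch, assuming an existing aModel instance (the file name is arbitrary, and the constructor parameters reflect the 1.18 signature as we understand it - check TSQLRestServerFullMemory.Create in mORMot.pas):
aServer := TSQLRestServerFullMemory.Create(aModel, 'data.json',
  {BinaryFile=}false, {HandleUserAuthentication=}true);
aServer.CreateMissingTables; // instantiate the TSQLRestStorageInMemory engines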
If your services need to have access to a remote ORM server, it may use a TSQLRestServerRemoteDB class instead: this server will use an internal TSQLRestClient instance to handle all ORM operations - it can be used e.g. to host some services on a stand-alone server, with all ORM and data access retrieved from another server: it will allow to easily implement a proxy architecture (for instance, as a DMZ for publishing services, but letting ORM process stay out of scope). See below for some hosting scenarios. Another option may be to use TSQLRestClientRedirect - see below - which does something similar, but inheriting from TSQLRestClientURI.
11.3.2. Storage classes
In the mORMot units, you may also find these classes, inheriting from TSQLRestStorage:
RESTful storage classes
In the above class hierarchy, the TSQLRestStorage[InMemory][External] classes are in fact used to store some TSQLRecord tables in any non-SQL backend:
Those classes are used within a main TSQLRestServer to host some given TSQLRecord classes, either in-memory, or on external databases. They do not enter in account in our Client-Server presentation, but are implementation details, on the server side.
11.3.3. Client classes
A full set of client classes will implement a RESTful access to a remote database, with associated services and business logic:
RESTful Client classes
Of course, all those TSQLRestClient* classes expect a TSQLRestServer to be available, via the corresponding transmission protocol.
11.4. In-process/stand-alone application
For a stand-alone application, create a TSQLRestClientDB. This particular class will initialize an internal TSQLRestServerDB instance, and you'll have full access to the SQLite3 database in the same process, with no speed penalty.
Content will still be converted to and from JSON, but there will be no delay due to the transmission of the data. Having JSON at hand will enable the internal cache - see below - and allow combining this in-process direct access with other transmission protocols (like named pipes or HTTP).
You may also directly work with a TSQLRestServerDB instance, but you will miss some handy features of the TSQLRestClientURI class, like User-Interface interaction, or advanced ORM/SOA abilities, based on TSQLRestServer.URI process.
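A minimal initialization sketch (TSQLBaby is a hypothetical record class, and the database file name is arbitrary) may look like this:
Model := TSQLModel.Create([TSQLBaby]);
Client := TSQLRestClientDB.Create(Model, nil, 'data.db3', TSQLRestServerDB, {auth=}false);
Client.Server.CreateMissingTables; // direct access to the embedded TSQLRestServerDB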
11.5. Local access via named pipes or Windows messages
For a Client-Server local application, that is some executable running on the same physical machine, create a TSQLRestServerDB instance, then use the corresponding ExportServer, ExportServerNamedPipe, ExportServerMessage method to instantiate either a in-process, Named-Pipe or Windows Messages server.
The Windows Messages layer has the lowest overhead and is the fastest transport layer available between several applications on the same computer. But it is reserved to desktop applications (since Windows Vista), so a Windows Messages server won't be accessible when run as a background service.
A named pipe communication is able to be served from a Windows service, and is known to be more efficient when transmitting big messages. So it is the preferred means of communication for a local application sharing data between clients.
Due to security restrictions in newer versions of Windows (i.e. starting with Vista), named pipes are not available by default over a network. This is the reason why this protocol is listed as a local access means only.
11.6. Network and Internet access via HTTP
For publishing a server via HTTP/1.1 over TCP/IP, create a TSQLHttpServer instance, and associate your running TSQLRestServerDB to it.
Typical initialization code, as extracted from sample "04 - HTTP Client-Server", may be:
Model := CreateSampleModel;
DBServer := TSQLRestServerDB.Create(Model,ChangeFileExt(paramstr(0),'.db3'),true);
DBServer.CreateMissingTables;
HttpServer := TSQLHttpServer.Create('8080',[DBServer],'+',HTTP_DEFAULT_MODE);
And you can optionally define some per domain / per sub-domain hosting redirection:
HttpServer.DomainHostRedirect('project.com','root'); // 'root' is current Model.Root
HttpServer.DomainHostRedirect('blog.project.com','root/blog'); // MVC application
In all cases, even if the HTTP protocol is very network friendly (especially over port 80), you shall always acquire IT approval and advice before any deployment over a corporate network, at least to negotiate firewall settings.
TWebSocketServer is a THttpServer which is able to upgrade to the WebSockets protocol, for asynchronous and bidirectional callbacks - see below.
THttpServerGeneric classes hierarchy
On production, THttpApiServer gives the best results, and has a proven and secure implementation. It is also the only class implementing HTTPS / SSL secure communication, if needed. That's why TSQLHttpServer will first try to use the fastest http.sys kernel-mode server, then fall back to the generic sockets-based THttpServer class in case of failure.
You can specify which kind of HTTP server class is to be used, via the aHttpServerKind: TSQLHttpServerOptions parameter of the TSQLHttpServer.Create constructor. By default, it will be HTTP_DEFAULT_MODE (i.e. useHttpApi under Windows), but you may specify useHttpApiRegisteringURI for automatic registration of the URI - see below - or useHttpSocket to use the socket-based THttpServer, or useBidirSocket for TWebSocketServer.
The THttpServerGeneric abstract class provides one OnRequest property event, in which all high level process is to take place - it expects some input parameters, then will compute the output content to be sent as response:
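The prototype looks roughly as follows (a sketch based on the SynCrtSock.pas declaration; exact parameter details may vary between framework revisions):
/// event handler used by THttpServerGeneric.OnRequest
// - Ctxt gives access to the input (URL, method, headers, body) and receives
// the output content and headers - the result is the HTTP status code
TOnHttpServerRequest = function(Ctxt: THttpServerRequest): cardinal of object;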
This event handler prototype is shared by both TThread-based classes able to implement a HTTP/1.1 server.
Both THttpApiServer and THttpServer classes will receive any incoming request, pass it to the TSQLRestServer instance matching the incoming URI request, via the OnRequest event handler.
If the request is a remote ORM operation, a JSON response will be retrieved from the internal cache of the framework, or computed using the SQLite3 database engine. In case of a remote service access - see below - the request will be computed on the server side, also marshalling the data as JSON. If you specified useBidirSocket kind of server, you may use remote service access via interfaces, with asynchronous callbacks - see below.
The resulting JSON content will be compressed using our very optimized SynLZ algorithm (20 times faster than Zip/Deflate for compression), if the client is a Delphi application knowing about SynLZ - for an AJAX client, it won't be compressed by default (even if you can enable the deflate algorithm - which may slow down the server).
Then the response will be marked as to be sent back to the Client...
11.6.2. High-performance http.sys server
Since Windows XP SP2 and Windows Server 2003, the Operating System provides a kernel stack to handle HTTP requests. This http.sys driver is in fact a full featured HTTP server, running in kernel mode. It is part of the networking subsystem of the Windows operating system, as a core component.
The SynCrtSock unit can implement a HTTP server based on this component. Of course, the Synopse mORMot framework will use it. If it's not available, it will launch our pure Delphi optimized HTTP server, using I/O completion ports and a Thread Pool.
What's good about http.sys?
Kernel-mode request queuing: Requests cause less overhead in context switching, because the kernel forwards requests directly to the correct worker process. If no worker process is available to accept a request, the kernel-mode request queue holds the request until a worker process picks it up.
Enhanced stability: When a worker process fails, service is not interrupted; the failure is undetectable by the user because the kernel queues the requests while the WWW service starts a new worker process for that application pool.
Faster processing: Requests are processed faster because they are routed directly from the kernel to the appropriate user-mode worker process, instead of being routed between two user-mode processes, i.e. the good old WinSock library and the worker process;
Embedded SSL process, when secure HTTPS communication is needed.
11.6.2.1. Use the http.sys server
Take a look at sample "04 - HTTP Client-Server", which is able to serve a SQLite3 database content over HTTP, using our RESTful ORM server. By default, it will try to use the http.sys server, then fall-back to plain socket server, in case of failure.
In fact, two steps are performed by the TSQLHttpServer constructor:
The HTTP Server API is first initialized (if needed) during THttpApiServer.Create constructor call. The HttpApi.dll library (which is the wrapper around http.sys) is loaded dynamically: so if you are running an old system (Windows XP SP1 for instance), you could still be able to use the server.
It then tries to register the URI matching the RESTful model - REST - via the THttpApiServer.AddUrl method. In short, the TSQLModel.Root property is used to compute the RESTful URI needed, just by the book. You can register several TSQLRestServer instances, each with its own TSQLModel.Root, if you need it.
As we already stated, if any of those two steps fails (e.g. if http.sys is not available, or if it was not possible to register the URLs), the TSQLHttpServer class will fall back into using the other THttpServer class, which is a plain Delphi multi-threaded server. It won't be said that we will let you down!
Inside http.sys all the magic is made... it will listen to any incoming connection request, then handle the headers, then check against any matching URL.
http.sys will handle all the communication by itself, leaving the server threads free to process the next request.
You can even use a special feature of http.sys to serve a file content as fast as possible. In fact, if you specify HTTP_RESP_STATICFILE as Ctxt.OutContentType, then Ctxt.OutContent is the UTF-8 file name of a file which must be sent to the client. Note that it will work only with THttpApiServer kind of server (i.e. using high performance http.sys API). But whole file access and sending will occur in background, at the kernel level, so with best performance. See sample "09 - HttpApi web server" and HttpApiServer.dpr file. If you use a TSQLHttpServer, the easiest is to define a method-based service - see below - and call Ctxt.ReturnFile() to return a file content from its name. We will see details about this below. Another possibility may be to override TSQLHttpServer.Request() method, as stated by Project04ServerStatic.dpr sample: but we think that a method-based service and Ctxt.ReturnFile() is preferred.
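As a hypothetical sketch of such a method-based service (TMyRestServer and its Static method are illustration names only):
procedure TMyRestServer.Static(Ctxt: TSQLRestServerURIContext);
begin
  // return the content of a local file - with THttpApiServer, the actual file
  // sending is performed by http.sys in kernel mode, via HTTP_RESP_STATICFILE
  Ctxt.ReturnFile(ExtractFilePath(paramstr(0)) + 'index.html', {Handle304NotModified=}true);
end;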
11.6.2.2. URI authorization as Administrator
This works fine under XP. Performances are very good, and stability is there. But... here comes the UAC nightmare again.
Security settings have changed since XP: now only applications running with Administrator rights can register URLs to http.sys - that is, no standard application run by a regular user. So the URI registration step will always fail with the default settings, under Vista and Seven.
The only case when this authorization is granted out of the box is when the application is launched as a Windows Service, with the default services execution user: by default, Windows services are launched with a user account which has Administrator rights.
11.6.2.2.1. Secure specific authorization
Standard security policy, as requested by Windows for all its http.sys based systems (i.e. IIS and WCF services) is to explicitly register the URI.
Depending on the system it runs on (i.e. Windows XP, or Vista and up), a different command line tool is to be used, which can be confusing.
To keep it simple, our SynCrtSock unit provides a dedicated method to authorize a particular URI prefix to be registered by any user.
Therefore, a program can be easily created and called once with administrator rights to make http.sys work with our framework. This could be done, for instance, as part of your Setup program.
Then when your server application will be launched (for instance, as an application in tray icon with normal user rights, or a background Windows service with tuned user rights), it will be able to register all needed URL.
Here is a sample program which can be launched to allow our TestSQL3.dpr to work as expected - it will allow any connection via port 888, using TSQLModel.Root set as 'root' - that is, the URI prefix http://+:888/root/ expected by the kernel server:
program TestSQL3Register;

uses
  SynCrtSock,
  SysUtils;

// force elevation to Administrator under Vista/Seven
{$R VistaAdm.res}

begin
  THttpApiServer.AddUrlAuthorize('root','888',false,'+');
end.
Take also a look at the Project04ServerRegister.dpr sample, in the context of a whole client/server RESTful solution over HTTP.
Note that you still need to open the IP port for incoming TCP traffic, in the Windows firewall, if you want your server to be accessible to the outer world, as usual.
11.6.2.2.2. Automatic authorization
An easier possibility could be to run the server application at least once as system Administrator.
The TSQLHttpServer.Create() constructor has a aHttpServerKind: TSQLHttpServerOptions parameter. By default, it will be set to useHttpApi. If you specify useHttpApiRegisteringURI, the class will register the URI before launching the server process.
All mORMot samples are compiled with this flag, as such:
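Following the earlier "04 - HTTP Client-Server" sample, such a server creation line typically looks like this (a sketch; variable names are taken from that sample):
HttpServer := TSQLHttpServer.Create('8080',[DBServer],'+',useHttpApiRegisteringURI);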
Some THttpApiServer methods are available with the HTTP Server 2.0 API, provided since Windows Vista and Windows Server 2008:
Method                     | Description
HasAPI2                    | check if the HTTP API 2.0 is available
SetTimeOutLimits()         | advanced timeout settings
LogStart() and LogStop     | HTTP level standard logging
SetAuthenticationSchemes() | kernel-mode authentication
Please see the corresponding documentation of SynCrtSock.pas for further details, and https://msdn.microsoft.com/en-us/library/windows/desktop/aa364703 as low-level reference of these features. Note that our implementation of http.sys is more complete than the one currently included in the official .Net WCF framework. Not bad for a third-party library, isn't it?
11.6.3. HTTP client(s)
In fact, there are several implementation of a HTTP/1.1 clients, according to this class hierarchy:
HTTP/1.1 Client RESTful classes
So you can select either TSQLHttpClientWinSock, TSQLHttpClientWinINet or TSQLHttpClientWinHTTP for a HTTP/1.1 client, under Windows. By design, TSQLHttpClientWinINet or TSQLHttpClientWinHTTP are not available outside of Windows, but TSQLHttpClientCurl is a great option under Linux, if the libcurl library is installed, especially if you want to use HTTPS - it will call SynCurl.pas. The TSQLHttpClientWebsockets class has the ability to upgrade the HTTP connection to the WebSockets protocol, which will be used for dual ways callbacks - see below.
Each class has its own architecture, and attaches itself to a Windows communication library, all eventually based on the WinSock API. As stated by their names, TSQLHttpClientWinSock will call directly the WinSock API, TSQLHttpClientWinINet will call the WinINet API (as used by IE 6) and TSQLHttpClientWinHTTP will call the latest WinHTTP API:
WinSock is the common user-space API to access the sockets stack of Windows, i.e. IP connection - it's able to handle any IP protocol, including TCP/IP, UDP/IP, and any protocol over it (including HTTP);
WinINet was designed as an HTTP API client platform that allowed the use of interactive message dialogs such as entering user credentials - it's able to handle HTTP and FTP protocols;
WinHTTP's API set is geared towards a non-interactive environment allowing for use in service-based applications where no user interaction is required or needed, and is also much faster than WinINet - it only handles HTTP protocol.
HTTP/1.1 Client architecture
Here are some PROs and CONs of the available solutions, under Windows:
Criteria            | WinSock       | WinINet                              | WinHTTP
API Level           | Low           | High                                 | Medium
Local speed         | Fastest       | Slow                                 | Fast
Network speed       | Slow          | Medium                               | Fast
Minimum OS          | Win95/98      | Win95/98                             | Win2000
HTTPS               | Not available | Available                            | Available
Integration with IE | None          | Excellent (proxy)                    | Available (see below)
User interactivity  | None          | Excellent (authentication, dial-up)  | None
As stated above, there is still a potential performance issue when using the direct TSQLHttpClientWinSock class over a network. It has been reported on our forum, and the root cause has not been identified yet.
Therefore, the TSQLHttpClient class maps by default to the TSQLHttpClientWinHTTP class. This is the recommended usage from a Delphi client application.
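A typical client instantiation is then simply (a sketch; host, port and Model match the earlier samples, and the credentials are the framework defaults):
Client := TSQLHttpClient.Create('localhost','8080',Model); // = TSQLHttpClientWinHTTP under Windows
Client.SetUser('User','synopse'); // default credentials, if authentication is enabled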
Note that even if WinHTTP does not share by default any proxy settings with Internet Explorer, it can import the current IE settings. The WinHTTP proxy configuration is set by either proxycfg.exe on Windows XP and Windows Server 2003 or earlier, or netsh.exe on Windows Vista and Windows Server 2008 or later; for instance, you can run "proxycfg -u" or "netsh winhttp import proxy source=ie" to use the current user's proxy settings for Internet Explorer. Under 64-bit Vista/Seven, to configure applications using the 32 bit WinHttp settings, call netsh or proxycfg bits from %SystemRoot%\SysWOW64 folder explicitly.
Note that by design, the TSQLHttpClient* classes, like other TSQLRestClientURI implementations, were designed to be thread safe, since their URI() method is protected by a lock. See below.
11.6.4. HTTPS server
The http.sys kernel mode server can be defined to serve HTTPS secure content, i.e. the SSL protocol over HTTP.
When the aHttpServerSecurity parameter is set to secSSL for the TSQLHttpServer.Create() constructor, the SSL layer will be enabled within http.sys. Note that useHttpSocket kind of server does not offer SSL/TLS encryption yet.
In order to let the SSL layer work as expected, you need first to create and import a set of certificates.
11.6.4.1. Certificates
You need one certificate (cert) to act as your root authority, and one to act as the actual certificate to be used for the SSL, which needs to be signed by your root authority. If you don't set up the root authority your single certificate won't be trusted, and you will start to discover this through a series of extremely annoying exceptions, long after the fact. To get a free certificate, i.e. for testing purposes, you may use an online service like http://www.startssl.com
Depending on the Windows revision you are using, you can run the Internet Information Services (IIS) Manager: from the Windows Start menu, click Administrative Tools > Internet Information Services (IIS) Manager. See http://support.microsoft.com/kb/299875
The following command (run in a Visual Studio command prompt) will create your root certificate:
makecert -sv SignRoot.pvk -cy authority -r signroot.cer -a sha1 -n "CN=Dev Certification Authority" -ss my -sr localmachine
Take a look at the above links to see what each of these arguments means; it isn't terribly important, but it's nice to know.
The MakeCert tool is available as part of the Windows SDK, which you can download from http://go.microsoft.com/fwlink/p/?linkid=84091 if you do not want to download the whole Visual Studio package. Membership in Administrators, or equivalent, on the local computer is the minimum required to complete this procedure.
Once this command has been run and succeeded, you need to make this certificate a trusted authority. You do this by using the MMC snap in console. Go to the run window and type "mmc", hit enter. Then in the window that opens (called the "Microsoft Management Console", for those who care) perform the following actions:
File -> Add/Remove Snap-in -> Add -> Double click Certificates -> Select Computer Account and Click Next -> Finish -> Close -> OK
Then select the Certificates (Local Computer) -> Personal -> Certificates node.
You should see a certificate called "Dev Certificate Authority" (or whatever else you decided to call it as parameter in the above command line). Move this certificate from the current node to Certificates (Local Computer) -> Trusted Root Certification Authorities -> Certificates node, drag and drop works happily.
Note that you do NOT yet have the cert you need :) You have made yourself able to create trusted certs though, which is nice. Now you have to create another cert, which you are actually going to use.
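The second makecert invocation is not reproduced here; a typical command (an assumption based on the usual MakeCert documentation - replace localhost with your actual host name) may look like:
makecert -iv SignRoot.pvk -ic signroot.cer -cy end -pe -n CN="localhost" -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localmachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12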
Note that you are using the first certificate as the author for this latest one. This is important... where I have localhost you need to put the DNS name of your box. In other words, if you deploy your service such that its endpoint reads http://bob:10010/Service then the name needs to be bob. In addition, you are going to need to do this for each host you need to run as (yes, so one for bob and another one for localhost).
Get the signature of your cert by double clicking on the cert (select Certificates (Local Computer) -> Personal -> Certificates), opening the Details tab, and scrolling down to the "Thumbprint" entry.
Select the thumbprint and copy it. Put it in Notepad or any other text editor and remove the spaces. Keep this hexadecimal thumbprint value safe, since we will need it soon.
You have your certs set up. Congrats! But we are not finished yet.
11.6.4.2. Configure a Port with an SSL certificate
Now you get to use another fun tool, httpcfg (for XP/2003), or its newer replacement, netsh http (for Vista/Seven/Eight).
Firstly run the command below to check that you don't have anything running on a port you want.
httpcfg query ssl
(under XP)
netsh http show sslcert
(under Vista/Seven/Eight)
If this is your first time doing this, it should just return a newline. If there is already SSL set up on the exact IP you want to use (or if later on you need to delete any mistakes) you can use the following command, where the IP and the port are displayed as a result from the previous query.
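The deletion command itself is not reproduced above; it would typically be (an assumption, with the IP:port value taken from the previous query's output):
httpcfg delete ssl -i 0.0.0.0:8012
(under XP)
netsh http delete sslcert ipport=0.0.0.0:8012
(under Vista/Seven/Eight)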
Now we have to bind an SSL certificate to a port number, as such (here below, 0000000000003ed9cd0c315bbb6dc1c08da5e6 is the thumbprint of the certificate, as you copied it into the notepad in the previous paragraph):
httpcfg set ssl -i 0.0.0.0:8012 -h 0000000000003ed9cd0c315bbb6dc1c08da5e6
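Under Vista and later, the equivalent binding is performed with netsh (an assumption; the appid value is any GUID of your choice, used to identify the owning application):
netsh http add sslcert ipport=0.0.0.0:8012 certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6 appid={00112233-4455-6677-8899-AABBCCDDEEFF}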
Defining hcSynLZ in the TSQLHttpClientGeneric.Compression property will enable SynLZ compression, negotiated via the HTTP headers:
ACCEPT-ENCODING: synlz
Our SynLZ algorithm is efficient, especially on JSON content, and very fast on the server side. It will therefore use fewer resources than hcDeflate, so it may be preferred when balancing the resource / concurrent client ratio.
You may include hcDeflate in the property, if you want to support this zip-derived compression algorithm, e.g. from browsers or any HTTP library. In terms of CPU resources, hcDeflate will be more demanding than hcSynLZ, but will obtain a slightly better compression ratio.
If both [hcSynLZ,hcDeflate] are defined, mORMot clients will use SynLZ compression, while other clients (e.g. browsers which do not know about the SynLZ encoding), will use the standard deflate compression.
11.6.5.2. AES encryption over HTTP
In addition to regular HTTPS flow encryption, which is not easy to setup due to the needed certificates, mORMot proposes a proprietary encryption scheme. It is based on SHA256 and AES256-CFB algorithms, so is known to be secure. You do not need to setup anything on the server or the client configuration, just run the TSQLHttpClient and TSQLHttpServer classes with the corresponding parameters.
Note that this encryption uses a global key for the whole process, which should match on both Server and Client sides. You should better hard-code this public key in your Client and Server Delphi applications, with some variants depending on each end-user service. You can use CompressShaAesSetKey() as defined in SynCrypto.pas to set globally this Encryption Key, and an optional Initialization Vector. You can even customize the AES chaining mode, if the default TAESCFB mode is not what you expect.
When the aHttpServerSecurity parameter is set to secSynShaAes for the TSQLHttpServer.Create() constructor, this proprietary encryption will be enabled on the server side. For instance:
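A server-side sketch may look as follows (the key value is of course an arbitrary example and should be kept private; the constructor parameters follow the TSQLHttpServer.Create signature as we understand it):
// set the encryption key, which must match on both client and server sides
CompressShaAesSetKey('secret#encryption*key');
// create the HTTP server with the proprietary SHA-256 / AES-256-CFB scheme enabled
HttpServer := TSQLHttpServer.Create('888',[DBServer],'+',useHttpApi,32,secSynShaAes);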
Once those parameters have been set, a new proprietary encoding will be defined in the HTTP headers:
ACCEPT-ENCODING: synshaaes
Then all HTTP body content will be compressed via our SynLZ algorithm, and encoded using the very secure AES256-CFB scheme. On both client and server side, this encryption will use AES-NI hardware instructions, if available on the CPU it runs on. It ensures that security is enhanced not at the price of performance and scalability.
Since it is a proprietary algorithm, it will work only for Delphi clients. When accessed from a plain AJAX client, or from a Delphi application with TSQLHttpClientGeneric.Compression = [], there won't be any encryption at all, due to the way HTTP negotiates its encoding. For safety, you should therefore use it in conjunction with per-URI Authentication - see below.
11.6.5.3. Prefer WebSockets between mORMot nodes
As we just saw, hcSynShaAes will only be effective between mORMot nodes, and only if both sides support the encoding. There is no guarantee that content will be encrypted during transmission, e.g. if the client did not define synshaaes.
Therefore, for truly safe communication between mORMot nodes, you may consider our WebSockets client/server implementation instead - see below. It implements a proprietary binary protocol for its communication frames, using also SynLZ compression and AES256-CFB encryption. And, last but not least, it features real-time callbacks, if needed. This kind of access may in fact be considered as the safest available mean of remote connection to a mORMot server, from stable mORMot clients, e.g. in a mORMotCloud. Then RESTful (AJAX/mobile) clients, may rely on plain HTTP, with hcDeflate compression.
11.7. Thread-safety
We tried to make mORMot at the same time fast and safe, and able to scale with the best possible performance on the hardware it runs on. Multi-threading is the key to better usage of modern multi-core CPUs, and also client responsiveness.
As a result, on the Server side, our framework was designed to be thread-safe.
On typical production use, the mORMot HTTP server - see JSON RESTful Client-Server - will run on its own optimized thread pool, then call the TSQLRestServer.URI method. This method is therefore expected to be thread-safe, e.g. when called from the TSQLHttpServer.Request method. Thanks to the RESTful approach of our framework, this method is the only one which needs to be thread-safe, since it is the single entry point of the whole server. This KISS design ensures better test coverage.
On the Client side, all TSQLRestClientURI classes are protected by a global mutex (Critical Sections), so are thread-safe. As a result, a single TSQLHttpClient instance can be shared among several threads, even if you may also use one client per thread, as is done with sample 21 - see below, for better responsiveness.
11.7.1. Thread safe design
We will now focus on the server side, which is the main strategic point (and potential bottleneck or point of failure) of any Client-Server architecture.
In order to achieve this thread-safety without sacrificing performance, the following rules were applied in TSQLRestServer.URI:
Most of this method's logic is to process the URI and parameters of the incoming request (in TSQLRestServerURIContext.URIDecode* methods), so is thread-safe by design (e.g. Model and RecordProps access do not change during process);
At RESTful / CRUD level, Add/Update/Delete/TransactionBegin/Commit/Rollback methods are locked by default (with a 2 seconds timeout), and Retrieve* methods are not;
TSQLRestStorage main methods (EngineList, EngineRetrieve, EngineAdd, EngineUpdate, EngineDelete, EngineRetrieveBlob, EngineUpdateBlob) are thread-safe: e.g. TSQLRestStorageInMemory uses a per-Table Critical Section;
TSQLRestServerCallBack method-based services - i.e. published methods of the inherited TSQLRestServer class as stated below - must be implemented to be thread-safe by default;
Interface-based services - see below - have several execution modes, including thread safe automated options (see TServiceMethodOption) or manual thread safety expectation, for better scaling - see below;
A protected fSessionCriticalSection is used to protect shared fSession[] access between clients;
The SQLite3 engine access is protected at SQL/JSON cache level, via DB.LockJSON() calls in TSQLRestServerDB methods;
Remote external tables - see External SQL database access - use thread-safe connections and statements when accessing the databases via SQL;
Access to fStats was not made thread-safe, since this data is indicative only: a mutex was not used to protect this resource.
We tried to make the internal Critical Sections as short as possible, or relative to a table only (e.g. for TSQLRestStorageInMemory).
At the SQLite3 engine level, there is some kind of "giant lock", so all TSQLDatabase request processing will be queued. This induces only a slight performance penalty - see Data access benchmark - since the internal SQL/JSON cache implementation needs such a global lock, and since most of the SQLite3 resource use consists in disk access, which benefits from being queued. It also allows to use the SQLite3 engine in lmExclusive locking mode if needed - see ACID and speed - with both benefits of high performance and multi-thread friendliness.
From the Client-side, the REST core of the framework is expected to be Client-safe by design, therefore perfectly thread-safe: it is one benefit of the stateless architecture.
11.7.2. Advanced threading settings
You can use the TSQLRestServer.AcquireExecutionMode[] property to refine the server-side threading mode. When amLocked is set, you can also set the AcquireExecutionLockedTimeOut[] property to specify a wait time to acquire the lock.
The default threading behavior is the following:
Command            | Description                                                                     | Default
execSOAByMethod    | for method-based services                                                       | amUnlocked
execSOAByInterface | for interface-based services                                                    | amUnlocked
execORMGet         | for ORM reads i.e. Retrieve* methods                                            | amUnlocked
execORMWrite       | for ORM writes i.e. Add Update Delete TransactionBegin Commit Rollback methods | amLocked + timeout of 2000 ms
On need, you can change those settings to define a particular execution scheme. For instance, some external databases (like MS SQL) expect any transaction to be executed within the same connection, so within the same thread context for SynOleDB.pas, since it uses a per-thread connection pool. When the server is remotely accessed via HTTP, the incoming requests will be executed from any thread of the HTTP server thread pool. As a result, you won't be able to manage a transaction over MS SQL from the client side with the default settings. To fix it, you can ensure all ORM write operations will be executed in a dedicated background thread, by setting either:
aServer.AcquireExecutionMode[execORMWrite] := amBackgroundThread;
aServer.AcquireWriteMode := amBackgroundThread; // same as previous
The same level of thread-safety can be defined for all kinds of commands, even if you should better know what you are doing when changing the default settings, since it may create some giant locks on the server side, therefore defeating any attempt at performance scaling via multi-threading - which is what mORMot excels in.
At ORM level, with external databases, your mORMot server may suffer from broken connections to the remote database. To avoid this, you may use the ConnectionTimeOutMinutes property to specify a maximum period of inactivity, after which all connections will be flushed and recreated, to avoid potential broken connection issues. In this case, you should ensure that all ORM processing is blocked, so that clearing the connection pool won't break anything in your multi-threaded server. As such, you may set a blocking mode for both execORMGet and execORMWrite, for instance:
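A minimal sketch of such settings, reusing the AcquireExecutionMode[] syntax shown above, may be:
aServer.AcquireExecutionMode[execORMGet] := amBackgroundThread;   // all reads in one background thread
aServer.AcquireExecutionMode[execORMWrite] := amBackgroundThread; // all writes in another background thread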
The above commands will create one thread for all read operations (execORMGet), and another thread for all write operations (execORMWrite). If you want all database access to take place in a single thread, for both read and write operations, you could write:
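Again as a sketch, the shared-thread option mentioned just below could be set like this:
aServer.AcquireExecutionMode[execORMGet] := amBackgroundORMSharedThread;   // share a single thread
aServer.AcquireExecutionMode[execORMWrite] := amBackgroundORMSharedThread; // for both reads and writes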
For instance, this sounds mandatory when using Jet/MSAccess as an external database, since its implementation does not seem to be thread-safe: if you write in one thread, then read immediately from another thread, the Jet engine is not able to find the just-written data from the second thread. This is clearly a bug of the Jet engine - but setting the amBackgroundORMSharedThread option circumvents the issue.
During any ORM or SOA process, you can access the current execution context from the ServiceContext threadvar variable, as stated below. For instance, you can retrieve the current logged user, or its session ID.
In practice, execSOAByMethod may benefit from a per-method locking, execSOAByInterface from using its own execution options - see below - and execORMGet should be left unlocked, to allow concurrent reads by all connected clients.
11.7.3. Proven behavior
When we are talking about thread-safety, nothing compares to a dedicated stress test program. An average human brain (like ours) is not good enough to ensure proper design of such a complex process. So we have to prove the abilities of our little mORMot.
In the supplied regression tests, we designed a whole class of multi-thread testing, named TTestMultiThreadProcess. Its methods will run each and every Client-Server protocol available (direct access via TSQLRestServerDB or TSQLRestClientDB, Windows Messages, named pipes, and both HTTP servers - i.e. http.sys based or WinSock-based) - see Client-Server process.
Each protocol will execute in parallel a list of INSERTs - i.e. TSQLRest.Add() - followed by a list of SELECTs - i.e. TSQLRest.Retrieve(). Those requests will be performed in 1 thread, then 2, 5, 10, 30 and 50 concurrent threads. The very same SQLite3 database (in lmExclusive locking mode) is accessed at once by all those clients. Then the IDs generated by each thread are compared together, to ensure no cross-insertion did occur during the process.
Those automated tests did already reveal some issues in the initial implementation of the framework. We fixed any encountered problems, as soon as possible. Feel free to send us any feedback, with code to reproduce the issue: but do not forget that multi-threading is also difficult to test - problems may occur not in the framework, but in the testing code itself!
When setting OperationCount to 1000 instead of the default 200, i.e. running 1000 INSERTions and 1000 SELECTs in concurrent threads, the numbers are the following, on the local machine (compiled with Delphi XE4):
For direct in-process access, TSQLRestClientDB sounds the best candidate: its abstraction layer is very thin, and much more multi-thread friendly than straight TSQLRestServerDB calls. It also will feature a cache, on need - see ORM Cache. And it will allow your code to switch between TSQLRestClientURI kind of classes, from its shared abstract methods.
Named pipes and Windows Messages are a bit constrained in highly parallel mode, but HTTP does pretty good. The server based on http.sys (HTTP API) is even impressive: the more clients, the more responsive it is. It is known to scale much better than the WinSock-based class supplied, which shines with one unique local client (i.e. in the context of those in-process regression tests), but sounds less reliable on production.
11.7.4. Highly concurrent clients performance
In addition, you can make yourself an idea, and run the "21 - HTTP Client-Server performance" sample programs, locally or over a network, to check the mORMot abilities to scale and serve a lot of clients with as few resources as possible.
Compile both client and server projects, then launch Project21HttpServer.exe. The server side will execute as a console window.
This Server will define the same TSQLRecordPeople as used during our multi-thread regression tests, that is:
aModel := TSQLModel.Create([TSQLRecordPeople]);
try
  aDatabaseFile := ChangeFileExt(paramstr(0),'.db3');
  DeleteFile(aDatabaseFile);
  aServer := TSQLRestServerDB.Create(aModel,aDatabaseFile);
  try
    aServer.DB.Synchronous := smOff;
    aServer.DB.LockingMode := lmExclusive;
    aServer.NoAJAXJSON := true;
    aServer.CreateMissingTables;
    // launch the server
    aHTTPServer := TSQLHttpServer.Create('888',[aServer]);
    try
      writeln(#13#10'Background server is running at http://localhost:888'#13#10+
        #13#10'Press [Enter] to close the server.');
      ConsoleWaitForEnterKey;
    finally
      aHTTPServer.Free;
    end;
  finally
    aServer.Free;
  end;
finally
  aModel.Free;
end;
Following the Model-View-Controller pattern, aServer will give remote CRUD access to the TSQLRecordPeople table (as defined in aModel), from HTTP. We defined Synchronous := smOff and LockingMode := lmExclusive to have the best performance possible, as stated by ACID and speed. Our purpose here is not to have true ACID behavior, but to test concurrent remote access.
The Client is just a RAD form which will execute the very same code as during the regression tests, i.e. a TTestMultiThreadProcess class instance, as shown by the following code:
Tests := TSynTestsLogged.Create;
Test := TTestMultiThreadProcess.Create(Tests);
try
  Test.ClientOnlyServerIP := StringToAnsi7(lbledtServerAddress.Text);
  Test.MinThreads := ThreadCount;
  Test.MaxThreads := ThreadCount;
  Test.OperationCount := OperationCount;
  Test.ClientPerThread := ClientPerThread;
  Test.CreateThreadPool;
  txt := Format(
    '%s'#13#10#13#10'Test started with %d threads, %d client(s) per thread and %d rows to be inserted...',
    [txt,ThreadCount,ClientPerThread,OperationCount]);
  mmoInfo.Text := txt;
  Timer.Start;
  Test._TSQLHttpClientWinHTTP_HTTPAPI;
  txt := mmoInfo.Text+Format(#13#10'Assertion(s) failed: %d / %d'+
    #13#10'Number of clients connected at once: %d'+
    #13#10'Time to process: %s'#13#10'Operation per second: %d',
    [Test.AssertionsFailed,Test.Assertions,
     ThreadCount*ClientPerThread,Timer.Stop,Timer.PerSec(OperationCount*2)]);
  mmoInfo.Text := txt;
finally
  Test.Free;
  Tests.Free;
end;
Each thread of the thread pool will create its own HTTP connection, then loop to insert (Add ORM method) and retrieve (Retrieve ORM method) a fixed number of objects - checking that the retrieved object fields match the inserted values. Then all generated IDs of all threads are checked for consistency, to ensure no race condition did occur.
When running over the following hardware configuration:
Server is a Core i7 Notebook, with SSD, under Windows 7;
Client is a Core 2 Duo Workstation, with regular hard-drive (not used), under Windows 7;
Communicating over a somewhat slow 100 Mb network with a low priced Ethernet HUB.
Typical results are the following:
Threads | Clients/thread | Rows inserted | Total Clients | Time (sec) | Op/sec
1       | 1              | 10000         | 1             | 15.78      | 1267
50      | 1              | 10000         | 50            | 2.96       | 6737
100     | 1              | 10000         | 100           | 3.09       | 6462
100     | 1              | 20000         | 100           | 6.19       | 6459
50      | 2              | 100000        | 100           | 34.99      | 5714
100     | 2              | 100000        | 200           | 36.56      | 5469
500     | 100            | 100000        | 50000         | 92.92      | 2152
During all tests, no assertion failed, meaning that no concurrency problem occurred, nor was any remote command lost. The SQLite3 core, exposed via the mORMot server, outputs data at an amazing pace of 6,000 op/sec - i.e. comparable to most high-end databases. It is worth noting that when run several times in a row, the same set of input parameters gives the very same speed results: it indicates that the architecture is pretty stable and could be considered as safe. The system is even able to serve 50,000 connected clients at once, with no data loss - in this case, performance is lower (2,152 inserts/second in the above table), but we clearly reached the CPU and network limit of our client hardware configuration; in the meanwhile, CPU resources on the Notebook server still had some headroom, and RAM consumption remained pretty low.
Average performance is pretty good, even more so if we consider that we are inserting one object per request, with no transaction. In fact, it sounds as if our little SQLite3 server is faster than most database servers, even when accessed in highly concurrent mode! In batch mode - see below - we may achieve even more amazing results.
Feel free to send your own benchmark results and feedback, e.g. with concurrent clients on several workstations, or long-running tests, on our forums.
12. Client-Server ORM
Adopt a mORMot
As stated above, all ORM features can be accessed either stand-alone, or remotely via some dedicated Client-Server process.
That is, CRUD operations can be executed either at the database level, or remotely, from the same methods defined in TSQLRest abstract class.
This feature has several benefits, among them:
No need to deploy the database client library for your application clients - a standard IP network connection is enough;
Therefore the client application can safely remain small, and stand-alone - no installation step is necessary, and you still have the full power of a native rich client;
Clients access their objects in an abstract way, i.e. without any knowledge of how persistence is handled: some classes may be stored in one SQLite3 database, others may exist only in the server's memory, others may be stored e.g. in an external Oracle, Firebird, PostgreSQL, MySQL, DB2, Informix or MS SQL database;
You can switch from local to remote access just by changing the class type, even at runtime;
Optimization is implemented at every level of the n-Tier architecture, e.g. cache or security.
12.1. ORM as local or remote
Typical Client-Server RESTful POST / Add request over HTTP/1.1 will be implemented as such, on both Client and Server side:
Client-Server implementation - Client side
Client-Server implementation - Server side
Of course, several clients can access the same server.
The same server is also able to publish its RESTful services over several communication protocols at once, e.g. HTTP/1.1 for remote access over a network (either corporate or the Internet), and named pipes or Windows Messages for fast local access.
The above diagram describes a direct INSERT into the Server's main SQLite3 engine, but other database back-ends are available - see Database layer.
It is possible to by-pass the whole Client-Server architecture, and let the application be stand-alone, by defining a TSQLRestClientDB class, which will embed a TSQLRestServerDB instance in the same executable:
Client-Server implementation - Stand-Alone application
In fact, the same executable could be launched as a server, as a stand-alone application, or even as a client application! It is just a matter of how you initialize your TSQLRest class instances - see Client-Server process. Some mORMot users use this feature to ease deployment, support and configuration. It can also be extremely useful at debugging time, since you may run the server and client side of your project at once within the same application, from the IDE.
In case of a Virtual Table use (either in-memory or for accessing an external database), the client side remains identical. Only the server side is modified as was specified by External database ORM internals:
Client-Server implementation - Server side with Virtual Tables
In fact, the above flow corresponds to a database model with only external virtual tables, and with StaticVirtualTableDirect=false, i.e. calling the Virtual Table mechanism of SQLite3 for each request.
But most of the time, i.e. for RESTful / CRUD commands, the execution is more direct:
Client-Server implementation - Server side with "static" Virtual TablesAs stated in External SQL database access, the static TSQLRestStorageExternal instance is called for most RESTful access. In practice, this design will induce no speed penalty, when compared to a direct database access. It could be even faster, if the server is located on the same computer than the database: in this case, use of JSON and REST could be faster - even faster when using below.
In order to be exhaustive, here is a more complete diagram, showing how native SQLite3, in-memory or external tables are handled on the server side. You'll find out how CRUD statements are handled directly for better speed, whereas any SQL JOIN query can also be processed among all kind of tables.
Client-Server implementation - Server side
You will find some speed numbers resulting from this unique architecture in the supplied Data access benchmark.
12.2. Stateless design
12.2.1. Server side synchronization
Even with a stateless ORM, it is sometimes necessary to have some event triggered on the server side when a record is edited.
On the server side, you can use this method prototype:
type
  /// used to define how to trigger Events on record update
  // - see TSQLRestServer.OnUpdateEvent property
  // - returns true on success, false if an error occured (but action must continue)
  TNotifySQLEvent = function(Sender: TSQLRestServer; Event: TSQLEvent;
    aTable: TSQLRecordClass; aID: TID): boolean of object;

  TSQLRestServer = class(TSQLRest)
  (...)
    /// a method can be specified here to trigger events after any table update
    OnUpdateEvent: TNotifySQLEvent;
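As a hypothetical sketch (the class and method names are ours, not part of the framework - TMyRestServer is assumed to inherit from TSQLRestServerDB, and DBServer to be one of its instances), such an event handler could be declared and assigned like this:
function TMyRestServer.LogTableUpdate(Sender: TSQLRestServer; Event: TSQLEvent;
  aTable: TSQLRecordClass; aID: TID): boolean;
begin
  // a real handler may e.g. notify other services or invalidate some cache here
  result := true; // returning false is reported as an error, but the action continues
end;
and, once the server instance has been created:
DBServer.OnUpdateEvent := DBServer.LogTableUpdate;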
12.2.2. Client side synchronization
But if you want all clients to be notified from any update, there is no direct way of broadcasting some event from the server to all clients.
It's not even technically possible with pipe-oriented transport layer, like named pipes or the TCP/IP - HTTP protocol.
What you can do easily, and what should be used in such a case, is to have a timer in your client applications which will call the TSQLRestClientURI.UpdateFromServer() method to refresh the content of any TSQLRecord or TSQLTableJSON instance:
/// check if the data may have changed of the server for this objects, and
// update it if possible
// - only working types are TSQLTableJSON and TSQLRecord descendants
// - make use of the InternalState function to check the data content revision
// - return true if Data is updated successfully, or false on any error
// during data retrieval from server (e.g. if the TSQLRecord has been deleted)
// - if Data contains only one TSQLTableJSON, PCurrentRow can point to the
// current selected row of this table, in order to refresh its value
function UpdateFromServer(const Data: array of TObject; out Refreshed: boolean;
  PCurrentRow: PInteger = nil): boolean;
With a per-second timer, it's quick and reactive, even over a remote network.
The stateless aspect of REST allows this approach to be safe, by design.
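A minimal sketch of such a polling refresh - here from a plain VCL TTimer event, with fRecord, fTable and UpdateDisplay being hypothetical members of your own form:
procedure TMyForm.RefreshTimerTimer(Sender: TObject);
var refreshed: boolean;
begin
  // ask the server if the content changed, and update the local copies if so
  if Client.UpdateFromServer([fRecord,fTable],refreshed) then
    if refreshed then
      UpdateDisplay; // refresh the UI from fRecord / fTable values
end;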
This is handled natively by our Client User Interface classes, with the following parameter defining the User interface:
/// defines the settings for a Tab
TSQLRibbonTabParameters = object
(...)
  /// by default, the screens are not refreshed automaticaly
  // - but you can enable the auto-refresh feature by setting this
  // property to TRUE, and creating a WM_TIMER timer to the form
  AutoRefresh: boolean;
This parameter will work only if you handle the WM_TIMER message in your main application form, and call Ribbon.WMRefreshTimer.
See for example this method in the main demo (FileMain.pas unit):
procedure TMainForm.WMRefreshTimer(var Msg: TWMTimer);
begin
Ribbon.WMRefreshTimer(Msg);
end;
In a multi-threaded client application, and even on the server side, a stateless approach makes writing software easier. You do not have to care about forcing data refresh in your client screens. It's up to the screens to get refreshed. In practice, I found it very convenient to rely on a timer instead of calling the somewhat "delicate" TThread.Synchronize method.
12.2.3. Let applications be responsive
All the client communication is executed by default in the current thread, i.e. the main thread for a typical GUI application.
Since all communication is performed in blocking mode, if the remote request takes a long time to process (due to a bad/slow network, or a long server-side action), the application may become unresponsive, from the end-user's point of view. Windows may even complain about a "non responsive application", and propose to kill the process, which is far from the expected behavior.
In order to interact properly with the user, an OnIdle property has been defined in TSQLRestClientURI, which will change the way communication is handled. If a callback event is defined, all client communication will be processed in a background thread, and the current thread (probably the main UI thread) will wait for the request to be performed in the background, running the OnIdle callback in a loop while waiting.
You can find in the mORMotUILogin unit two methods matching this callback signature:
The first OnIdleProcess() callback will change the mouse cursor shape to crHourGlass after a defined period of time. The OnIdleProcessForm() callback won't only change the mouse cursor, but also display a pop-up window with a 'Please wait...' message, if the request takes even more time. Both will call Application.ProcessMessages to ensure the application User Interface is still responsive.
Some global variables are also defined to tune the behavior of those two callbacks:
var
  /// define when TLoginForm.OnIdleProcess() has to display the crHourGlass cursor
  // after a given time elapsed, in milliseconds
  // - default is 100 ms
  OnIdleProcessCursorChangeTimeout: integer = 100;

  /// define when TLoginForm.OnIdleProcessForm() has to display the temporary
  // form after a given time elapsed, in milliseconds
  // - default is 2000 ms, i.e. 2 seconds
  OnIdleProcessTemporaryFormTimeout: integer = 2000;

  /// define the message text displayed by TLoginForm.OnIdleProcessForm()
  // - default is sOnIdleProcessFormMessage resourcestring, i.e. 'Please wait...'
  OnIdleProcessTemporaryFormMessage: string;
You can therefore change those settings to customize the user experience. We tested it with a 3-second artificial delay for each request, and the applications were running smoothly, even if slowly - comparable to most Web Applications, in fact. The SynFile main demo (available in the SQlite3\Samples\MainDemo folder) defines such a callback.
Note that this OnIdle feature is defined at TSQLRestClientURI class level, so it is available for all communication protocols - not only HTTP, but also named pipes or in-process - and could be used to enhance the user experience in case of any time-consuming process.
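Enabling it is typically a one-line assignment; the SynFile demo does something along these lines (sketch only - check the actual sample source):
Client.OnIdle := TLoginForm.OnIdleProcessForm; // background execution + 'Please wait...' form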
12.3. BATCH sequences for adding/updating/deleting records
12.3.1. BATCH process
When should the so-called BATCH sequences be used?
In a standard Client-Server architecture, especially with the common understanding (and most implementations) of a RESTful service, any Add / Update / Delete method call requires a back-and-forth exchange with the remote server. A so-called round-trip occurs: a message is sent to the server, then a response is sent back to the client.
In case of a remote connection via the Internet (or a slow network), you could have up to 100 ms of latency: it's just the "ping" timing, i.e. the time spent for your IP packet to go to the server, then back to you.
If you are making a number of such calls (e.g. add 1000 records), you'll have 100*1000 ms = 100 s = 1:40 min just because of this network latency!
BATCH mode Client-Server latency
The BATCH sequence allows you to regroup those statements into just ONE remote call. Internally, it builds a JSON stream, then posts this stream at once to the server. The server then answers at once, after having performed all the modifications.
Some new TSQLRestClientURI methods have been added to implement BATCH sequences to speed up database modifications: after a call to BatchStart, database modification statements are added to the sequence via BatchAdd / BatchUpdate / BatchDelete, then all statements are sent as one to the remote server via BatchSend - this is MUCH faster than individual calls to Add / Update / Delete in case of a slow remote connection (typically HTTP over Internet).
Since the statements are performed at once, you can't receive the result (e.g. the ID of the added row) at the same time as you append the request to the BATCH sequence. So you'll have to wait for the BatchSend method to retrieve all results, at once, in a dynamic array of TID.
As you may guess, it's also a good idea to use a transaction for the whole process. By default, the BATCH sequence is not embedded into a transaction.
You have two possibilities to add a transaction:
Either let the caller use an explicit TransactionBegin ... try... Commit except RollBack block;
Or specify a number of rows as AutomaticTransactionPerRow parameter to BatchStart(): in this case, a transaction will be emitted (up to the specified number of rows) on the server side. You can just set maxInt if you want all rows to be modified in a single transaction.
This second method is preferred, since defining transactions from the client side is not a good idea: it may block other clients' attempts to create their own transactions.
Here is a typical use (extracted from the regression tests in SynSelfTests.pas):
// start the BATCH sequence
Check(ClientDist.BatchStart(TSQLRecordPeople,1000));
// now a transaction will be created by chunk of 1000 modifications
// delete some elements
for i := 0 to n-1 do
  Check(ClientDist.BatchDelete(IntArray[i])=i);
// update some elements
nupd := 0;
for i := 0 to aStatic.Count-1 do
  if i and 7<>0 then
  begin // not yet deleted in BATCH mode
    Check(ClientDist.Retrieve(aStatic.ID[i],V));
    V.YearOfBirth := 1800+nupd;
    Check(ClientDist.BatchUpdate(V)=nupd+n);
    inc(nupd);
  end;
// add some elements
V.LastName := 'New';
for i := 0 to 1000 do
begin
  V.FirstName := RandomUTF8(10);
  V.YearOfBirth := i+1000;
  Check(ClientDist.BatchAdd(V,true)=n+nupd+i);
end;
// send the BATCH sequences to the server
Check(ClientDist.BatchSend(Results)=200);
// now all data has been commited on the server
// now Results[] contains the results of every BATCH statement...
Check(Length(Results)=n+nupd+1001);
// Results[0] to Results[n-1] should be 200 = deletion OK
// Results[n] to Results[n+nupd-1] should be 200 = update OK
// Results[n+nupd] to Results[high(Results)] are the IDs of each added record
for i := 0 to high(Results) do
  if i<nupd+n then
    Check(Results[i]=200)
  else
  begin
    Check(Results[i]>0);
    ndx := aStatic.IDToIndex(Results[i]);
    Check(ndx>=0);
    with TSQLRecordPeople(aStatic.Items[ndx]) do
    begin
      Check(LastName='New','BatchAdd');
      Check(YearOfBirth=1000+i-nupd-n);
    end;
  end;
// check ClientDist.BatchDelete(IntArray[i]) did erase the record
for i := 0 to n-1 do
  Check(not ClientDist.Retrieve(IntArray[i],V),'BatchDelete');
In the above code, all CRUD operations are performed as usual, using the BatchAdd / BatchDelete / BatchUpdate methods instead of the plain Add / Delete / Update methods. The ORM will take care of all the low-level data processing, including JSON serialization, automatic per-chunk transaction creation, and SQL statement generation, with several optimizations - see below and below.
In the above example, we started the batch process involving only TSQLRecordPeople kind of objects:
Check(ClientDist.BatchStart(TSQLRecordPeople,1000));
But you could mix any kind of TSQLRecord content, if you set the class to nil, as such:
Check(ClientDist.BatchStart(nil,1000));
or use the BatchStartAny() method:
Check(ClientDist.BatchStartAny(1000));
In practice, you had better create and maintain your own instances of TSQLRestBatch, so that you will be able to implement any number of simultaneous batch processes - see below.
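A hedged sketch of such a locally owned batch (assuming a Client: TSQLRestClientURI and already prepared NewPerson / ExistingPerson records - the names are illustrative only):
var batch: TSQLRestBatch;
    ids: TIDDynArray;
begin
  batch := TSQLRestBatch.Create(Client,TSQLRecordPeople,1000);
  try
    batch.Add(NewPerson,true);    // queue an insertion (SendData=true)
    batch.Update(ExistingPerson); // queue an update
    Client.BatchSend(batch,ids);  // one single round-trip and transaction
  finally
    batch.Free;
  end;
end;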
12.3.2. Transmitted JSON
As described above, all Batch*() methods do serialize the objects values as JSON on the client side, then send this JSON at once to the server, where it will be processed without any client-server round-trip and slow latency.
Here is an extract of a typical JSON stream as sent to the server:
{"People":["DELETE",2,"DELETE",13,"DELETE",24,
(...) all DELETE actions
,"DELETE",11010,
"PUT",{"RowID":3,"FirstName":"Sergei1","LastName":"Rachmaninoff","YearOfBirth":1800, "YearOfDeath":1943},
"PUT",{"RowID":4,"FirstName":"Alexandre1","LastName":"Dumas","YearOfBirth":1801, "YearOfDeath":1870},
(...) all PUT = update actions
"PUT",{"RowID":11012,"FirstName":"Leonard","LastName":"da Vinçi","YearOfBirth":9025, "YearOfDeath":1519},
"POST",{"FirstName":"â@â¢Å"Hâ m£ g","LastName":"New","YearOfBirth":1000, "YearOfDeath":1519},
"POST",{"FirstName":"@â¦,KA½à #¶f","LastName":"New","YearOfBirth":1001, "YearOfDeath":1519},
(...) all POST = add actions
"POST",{"FirstName":"+ÂtqCXW3Ã\"","LastName":"New","YearOfBirth":2000, "YearOfDeath":1519}
]}
If BatchAdd involves only simple fields (which is the default), those field names won't be transmitted, and the following will be emitted in the JSON stream instead, to reduce the needed bandwidth:
"SIMPLE",["â@â¢Å"Hâ m£ g","New",1000,1519],
By default, BLOB fields are excluded from the Batch content: only simple fields are sent. But the BatchAdd / BatchUpdate methods (or the corresponding TSQLRestBatch.Add / Update methods) may receive a custom list of fields to be transmitted, in which you could specify any TSQLRawBlob field: the binary BLOB content will be encoded as Base64 within the JSON process, and you may definitely gain some resources and speed in such a case. Of course, all the data should be small enough to be stored in memory, so the BLOB fields should better not exceed a few dozen MB - use several Batch instances in a loop, if you have a huge set of data.
On success, the following JSON stream will be received from the server:
[200,200,...]
This array of results is either the HTTP status codes (here 200 means OK), or the inserted new ID (for a BatchAdd command).
All the JSON generation (client-side) and parsing (server-side) has been optimized to minimize the resources needed. With the new internal SynLZ compression (available by default in our HTTP Client-Server classes), the used bandwidth is minimal.
Thanks to this BATCH process, most time is now spent into the database engine itself, and not in the communication layer.
12.3.3. Unit Of Work pattern
12.3.3.1. Several Batches
On the TSQLRestClientURI side, all BatchStart/BatchAdd/BatchUpdate/BatchDelete methods are using a single temporary storage during the BATCH preparation. This may be safe only if one single thread is accessing the methods - which is usually the case for a REST Client application.
In fact, all BATCH processing uses a TSQLRestBatch class, which can be created on the fly, and safely coexist as multiple instances for the same TSQLRest. As a result, you can create your own local TSQLRestBatch instances, for safe batch processing. This is in fact mandatory on the TSQLRestServer side, which does not publish the Batch*() methods, since they would not be thread-safe.
The ability to handle several TSQLRestBatch classes at the same time allows implementing the Unit Of Work pattern. It can be used to maintain a list of objects affected by a business transaction and to coordinate the writing out of changes and the resolution of concurrency problems, especially in a complex SOA application with a huge number of connected clients.
In a way, you can think of the Unit of Work as a place to dump all transaction-handling code. The responsibilities of the Unit of Work are to:
Manage transactions;
Order the database inserts, deletes, and updates;
Prevent concurrency problems;
Group requests to maximize the database performance.
The value of using a Unit of Work pattern is to free the rest of your code from these concerns so that you can otherwise concentrate on business logic.
12.3.3.2. Updating only the mapped fields
In practice, the BatchUpdate method will only update the mapped fields if called on a record on which a FillPrepare was performed, and which has not been unmapped since (i.e. with no call to FillClose). This is required for coherency of the retrieval/modification process.
For instance, in the following code, V.FillPrepare will retrieve only ID and YearOfBirth fields of the TSQLRecordPeople table, so subsequent BatchUpdate(V) calls will only update the YearOfBirth field:
// test BATCH update from partial FillPrepare
V.FillPrepare(ClientDist,'LastName=:("New"):','ID,YearOfBirth');
if ClientDist.TransactionBegin(TSQLRecordPeople) then
try
  Check(ClientDist.BatchStart(TSQLRecordPeople));
  n := 0;
  V.LastName := 'NotTransmitted';
  while V.FillOne do
  begin
    Check(V.LastName='NotTransmitted');
    Check(V.YearOfBirth=n+1000);
    V.YearOfBirth := n;
    ClientDist.BatchUpdate(V); // will update only V.YearOfBirth
    inc(n);
  end;
  (...)
The transmitted JSON will be computed as such on the client side:
....,"PUT",{"RowID":324,"YearOfBirth":1000},...
And the generated SQL on the server side will be:
UPDATE People SET YearOfBirth=? WHERE RowID=?
... with bound parameters: [1000,324]
As a result, BATCH process could be seen as a good way of implementing Unit Of Work for your business layer - see below. You will be able to modify all your objects as requested, with high-level OOP methods, then have all data transmitted and processed at once when BatchSend() is called. The BatchStart - BatchSend - BatchAbort commands will induce a safe transactional model, relying on the client side for tracking the object modifications, and optimizing the database process on the server side as a simple "save and forget" task, to any SQL or NoSQL engine.
Note that if several ClientDist.BatchUpdate(V) commands are executed within the same FillPrepare() context, they will contain the same fields (RowID and YearOfBirth). They will therefore generate the same statement (UPDATE People SET YearOfBirth=? WHERE RowID=?), which will benefit of Array Binding on the database side - see below - if available.
Here is some code, extracted from the "web blog" sample "30 - MVC Server", which will update an integer array mapped into a table. All TSQLTag.Occurence integers are stored in a local TSQLTags.Lookup[].Occurence dynamic array, which will be used to display the occurrence count of each tag of the articles. The following method will first retrieve ID and Occurence from the database, and update TSQLTag.Occurence if the internal dynamic array contains a new value.
procedure TSQLTags.SaveOccurence(aRest: TSQLRest);
var tag: TSQLTag;
    batch: TSQLRestBatch;
begin
  Lock.ProtectMethod;
  TAutoFree.Several([
    @tag,TSQLTag.CreateAndFillPrepare(aRest,'','RowID,Occurence'),
    @batch,TSQLRestBatch.Create(aRest,TSQLTag,1000)]);
  while tag.FillOne do
  begin
    if tag.ID<=length(Lookup) then
      if Lookup[tag.ID-1].Occurence<>tag.Occurence then
      begin
        tag.Occurence := Lookup[tag.ID-1].Occurence;
        batch.Update(tag); // will update only Occurence field
      end;
  end;
  aRest.BatchSend(batch);
end;
In the above code, you can identify:
CreateAndFillPrepare + FillOne methods are able to retrieve all values of the TSQLTag class, and iterate easily over them;
A local TSQLRestBatch is prepared, and will store locally - via batch.Update() - any modification; as we already stated, only the retrieved field (i.e. 'Occurence') will be marked as to be updated;
aRest.BatchSend(batch) will send all new values (if any) to the server, in a single network round trip, and a single transaction;
This method is made thread safe by using Lock.ProtectMethod (Lock is a mutex private to the TSQLTags instance);
Local variables are allocated and automatically released when the method exits, using TAutoFree.Several() - see Automatic TSQLRecord memory handling - which avoids writing two nested try .. finally Free end blocks.
Such a pattern is very common in mORMot, and illustrates how high-level ORM methods can be used instead of manual SQL - with the potential benefit of much better performance, and cleaner code.
12.3.4. Local network as bottleneck
When using a remote database on a physical network, a round-trip delay occurs for each request, this time between the ORM server side and the external Database engine.
BATCH mode latency issue on external DB
At first, the 1 ms latency due to the external database round-trip may sound negligible. BATCH sequences for adding/updating/deleting records did already shortcut the Internet latency, which was much higher.
But in a Service-Oriented Architecture (SOA), most of the process is done on the server side: the slightest execution delay will induce a noticeable performance penalty. In practice, you won't be able to achieve more than 500-600 requests per second when performing individual INSERT, DELETE or UPDATE statements over any SQL database. Even if run locally on the same server, most SQL databases will suffer from the overhead of inter-process communications, achieving 6,000-7,000 update requests per second at best.
Your customers may not understand why using a SQLite3 engine will be much faster than a dedicated Oracle instance they pay a huge amount of money for, since SQLite3 runs locally in the ORM server process. One common solution is to use stored procedures, or tune the SQL for your database - but you will lose most of the ORM and SOA benefits - see below.
Of course, mORMot can do better than that. Its ORM will automatically use two ways of diminishing the number of round-trips to the database:
Array binding of parameters, when supported by the database access layer - see below;
Optimized SQL statements for multi-row (bulk) insertion - see below.
Both methods will group all the transmitted data in chunks, as much as possible. Performance will therefore increase, reaching 50,000-60,000 writes per second, depending on the database abilities.
Those features are enabled by default, and the fastest method will always be selected by the ORM core, as soon as it is available on the database back-end. You do not have to worry about configuring your application. Just enjoy its speed.
Our SynDB.pas unit offers some TSQLDBStatement.BindArray() methods, introducing native array binding for faster database batch modifications. They work in conjunction with the BATCH methods of the ORM, so that CRUD modification actions will transparently be grouped within one round-trip over the network.
Thanks to this enhancement, inserting records within Oracle (over a 100 Mb Ethernet network) goes from 400-500 rows per second to more than 70,000 rows per second, according to our Data access benchmark.
The great maintainers of the ZEOS Open Source library especially tuned its internals to support mORMot at its full speed, directly accessing the ZDBC layer - see ZEOS via direct ZDBC. The ZEOS 7.2 branch benefited from a huge code refactoring, and also introduced array binding abilities. This feature will be recognized and handled by our ORM, if available at the ZDBC provider side. Today, only the ZDBC Oracle and Firebird providers support this feature. But the list is growing.
The FireDAC (formerly AnyDAC) library is the only one implementing this feature (known as Array DML in the FireDAC documentation) among all available commercial Delphi libraries. Enabling it gives a similar performance boost, not only for Oracle, but also for MS SQL, Firebird, DB2, MySQL, Informix and PostgreSQL.
In practice, when accessing Oracle, our own direct implementation in SynDBOracle still gives better performance results than the ZDBC / FireDAC implementation.
In fact, some modern database engines (e.g. Oracle or MS SQL) are even faster when using array binding, not only due to the reduced network latency, but also because, in such operations, integrity checking and index updates are performed at the end of the bulk process. If your table has several indexes and constraints, using this feature will be even faster than a "naive" stored procedure executing individual statements within a loop.
12.3.4.1.2. For faster IN clause
Sometimes, you want to write SELECT statements with a huge IN clause. If the number of items in the IN expression is stable, you may benefit from a prepared statement, e.g.
SELECT * FROM MyTable WHERE ID IN (?,?,?,?,?)
But if the IDs are not fixed, you would have to create an expression without any parameter, or use a temporary table:
SELECT * FROM MyTable WHERE ID IN (1,4,8,12,24,27)
As an alternative, SynDBOracle provides the ability to bind an array of parameters which may be cast to an Oracle Object, so that you could use it as a single parameter. The current implementation supports either TInt64DynArray or TRawUTF8DynArray values, as such:
var
arr: TInt64DynArray = [1, 2, 3];
Query := TSQLDBOracleConnectionProperties.NewThreadSafeStatementPrepared(
'select * from table where table.id in'+
'(select column_value from table(cast(? as SYS.ODCINUMBERLIST)))');
Query.BindArray(1, arr);
Query.ExecutePrepared;
RawUTF8 arrays are also supported (which can be used as fall back in case Int64 arrays are not supported by the client, e.g. with Oracle 10):
var
arr: TRawUTF8DynArray = ['123123423452345', '3124234454351324', '53567568578867867'];
Query := TSQLDBOracleConnectionProperties.NewThreadSafeStatementPrepared(
'select * from table where table.id in'+
'(select column_value from table(cast(? as SYS.ODCIVARCHAR2LIST)))');
Query.BindArray(1, arr);
Query.ExecutePrepared;
From tests in production, this implementation is 2-100 times faster (depending on array and table size) and also simpler, compared to the temporary table solution. The drawback is that, for now, it is supported by SynDBOracle only.
12.3.4.2. Optimized SQL for bulk insert
Sadly, array binding is not available for all databases or libraries. In order to maximize speed, during BATCH insertion, the mORMot ORM kernel is able to generate some optimized SQL statements, depending on the target database, to send several rows of data at once. It induces a noticeable speed increase when saving several objects into an external database.
Automatic multi-INSERT statement generation is available for:
Almost all the supported External SQL database access (in the mORMotDB.pas unit): SQlite3 (3.7.11 and later), MySQL, PostgreSQL, MS SQL Server (2008 and up), Oracle, Firebird, DB2, Informix and NexusDB - and since it is implemented at SQL level, it is available for all supported access libraries, e.g. ODBC, OleDB, Zeos/ZDBC, UniDAC;
And, in the NoSQL form of "documents array" insertion, for the MongoDB database (in the mORMotMongoDB.pas unit).
It means that even providers not implementing array binding (like OleDB, ODBC or UniDAC) are able to have a huge boost at data insertion.
SQlite3, MySQL, PostgreSQL, MSSQL 2008, DB2, Informix and NexusDB handle INSERT statements with multiple VALUES, in the following SQL-92 standard syntax, using parameters:
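For illustration, such a multi-row statement would look like the following (literal values shown for readability, reusing the phone_book example of the next paragraphs; the ORM will actually emit ? bound parameters):
INSERT INTO phone_book VALUES
  ('John Doe', '555-1212'),
  ('Peter Doe', '555-2323');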
Oracle implements the weird-but-similar syntax (note the mandatory SELECT at the end):
INSERT ALL
INTO phone_book VALUES ('John Doe', '555-1212')
INTO phone_book VALUES ('Peter Doe', '555-2323')
SELECT * FROM DUAL;
Firebird implements its own syntax:
execute block
as
begin
INSERT INTO phone_book VALUES ('John Doe', '555-1212');
INSERT INTO phone_book VALUES ('Peter Doe', '555-2323');
end
As a result, most engines show a nice speed boost when using the BatchAdd() method. See Data access benchmark for numbers and details.
If you want to use a map/reduce algorithm in your application, or the Unit Of Work pattern - in addition to ORM data access - all those enhancements will speed up your data processing a lot. Reading and writing huge amounts of data has never been so fast and easy: it is time to replace stored-procedure processing by high-level code implemented in your Domain services.
12.4. CRUD level cache
Starting with revision 1.16 of the framework, tuned record cache has been implemented at the CRUD/RESTful level, for specific tables or records, on both the server and client sides.
See ORM Cache for the other data cache patterns available in the framework, mainly JSON global cache at SQlite3 level, on the server side. All mORMot's data caches are using JSON as storage format, which was found to be simple and efficient for this purpose.
12.4.1. Where to cache
In fact, a unique caching mechanism is available at the TSQLRest level, for both TSQLRestClient and TSQLRestServer kind of classes. Therefore, Delphi clients can have their own cache, and the Server can also have its own cache. A client without any cache (e.g. a rough AJAX client) will take advantage of the server cache, at least.
By default, there is no caching at REST level. Then you can use the TSQLRest.Cache property to tune your cache policy for each TSQLRest instance.
CRUD caching in mORMot
When caching is set on the server for a particular record or table, in-memory values could be retrieved from this cache instead of calling the database engine each time. When properly used, this will increase global server responsiveness and allow more clients to be served with the same hardware.
On the client side, a local in-memory cache could be first checked when a record is to be retrieved. If the item is found, the client uses this cached value. If the data item is not in the local cache, the query is then sent to the server, just as usual. Due to the high latency of a remote client-server request, adding caching on the client side does make sense. Client caching properties can be tuned in order to handle properly remote HTTP access via the Internet, which may be much slower than a local Network.
Our caching implementation is transparent to the CRUD code. The very same usual ORM methods are to be called to access a record (Retrieve Update Add), then either client or server cache will be used, if available. For applications that frequently access the same data - a large category - record-level caching improves both performance and scalability.
12.4.2. When to cache
The main problem with cache is about data that both changes and is accessed simultaneously by multiple clients.
In the current implementation, a "pessimistic" concurrency control is used by our framework, relying on explicit locks, and (ab)use of its Stateless ORM general design. It is up to the coder to ensure that no major confusion could arise from concurrency issues.
You must tune caching at both Client and Server level - each side will probably require its own set of cache options.
In your project implementation, caching had better not be used at first, but added on need, when performance and efficiency require it. Adding a cache implies having automated regression tests available, since in a Client-Server multi-threaded architecture, "premature optimization is the root of all evil" (Donald Knuth).
The main rules may be simply:
Not to cache if it may break something relevant (like a global monetary balance value);
Not to cache unless you need to (see Knuth's wisdom);
Ensure that caching is worth it (if a value is likely to be overridden often, it could be even slower to cache it);
Test once, test twice, always test and do not forget to test even more.
In practice, caching issues could be difficult to track. So in case of doubt (why was this data not accurate? it sounds like an old revision?), you may immediately disable caching, then ensure that you were not too optimistic about your cache policy.
12.4.3. What to cache
Typical content of these two tuned caches can be any global configuration settings, or any other kind of unchanging data which is not likely to vary often, and is accessed simultaneously by multiple clients, such as catalog information for an on-line retailer.
Another good use of caching is to store data that changes but is accessed by only one client at a time. By setting a cache at the client level for such content, the server won't be called often to retrieve the client-specific data. In such case, the problem of handling concurrent access to the cached data doesn't arise.
Profiling can be necessary to identify which data is to be registered within those caches, either at the client and/or the server side. The logging feature - see below - integrated to mORMot can be very handy to tune the caching settings, due to its unique customer-side profiling ability.
But most of the time, a human guess at the business logic level is enough to decide which data is to be cached on each side, and to ensure content coherency.
By default, REST level cache is disabled, until you call TSQLRest.Cache's SetCache() and SetTimeOut() methods. Those methods will define the caching policy, able to specify which table(s) or record(s) are to be cached, either at the client or the server level.
Once enabled for a table and a set of IDs on a given table, any further call to TSQLRest.Retrieve(aClass,aID) or TSQLRecord.Create(aRest,aID) will first attempt to retrieve the TSQLRecord of the given aID from the internal TSQLRestCache instance's in-memory cache, if available. Note that more complex requests, like queries on other fields than the ID primary key, or JOINed queries, won't be cached at REST level. But such requests may benefit of the JSON global cache, at SQLite3 level, on the server side.
For instance, here is how the Client-side caching is tested about one individual record:
(...)
Client.Cache.SetCache(TSQLRecordPeople); // cache whole table
TestOne;
Client.Cache.Clear; // reset cache settings
Client.Cache.SetCache(Rec); // cache one record
// same as Client.Cache.SetCache(TSQLRecordPeople,Rec.ID);
TestOne;
(...)
Database.Cache.SetCache(TSQLRecordPeople); // server-side
(...)
In the above code, Client.Cache.Clear is used to reset all cache settings (i.e. not only flush the cache content, but also delete all settings previously made with Cache.SetCache() or Cache.SetTimeOut() calls). So in the above code, a global cache is first enabled for the whole TSQLRecordPeople table, then the cache settings are reset, then the cache is enabled for only the particular Rec record. To reset the cache content (e.g. if you consider some values may be deprecated), just call the Cache.Flush methods (able to flush the in-memory cache for all tables, a given table, or a given record).
It's worth warning once again that it's up to the code responsibility to ensure that these caches are consistent over the network. Server side and client side have their own coherency profile to be ensured. The caching policy has to match your data model, and application use cases.
On the Client side, only local CRUD operations are tracked. According to the stateless design, adding a time-out value definitely makes sense, unless the corresponding data is known to be dedicated to this particular client (like session data). If no time-out period is set, it's up to the client to flush its own cache on purpose, by using the TSQLRestClient.Cache.Flush() methods.
On the Server side, all CRUD operations of the ORM (like Add / Update / Delete) will be tracked, and the cache will be notified of any data change. But direct SQL statements changing table contents (like an UPDATE or a DELETE over one or multiple rows with a WHERE clause) are not tracked by the current implementation: in such a case, you'll have to manually flush the server cache content, to enforce data coherency. If such statements did occur on the server side, the TSQLRestServer.Cache.Flush() methods are to be called, e.g. in the services which executed the corresponding SQL. If such non-CRUD statements did occur on the client side, it is possible to ensure that the server content is coherent with the client side, via a dedicated TSQLRestClientURI.ServerCacheFlush() method, which will call a dedicated standard service on the server to flush its cache content on purpose.
12.4.5. Business logic and API cache
If your implementation follows a good design - see below - the high-level logic is encapsulated into business types, and you won't use directly the TSQLRecord definitions. Another good practice is to define DTO types - see below - probably as records or dynamic arrays.
The best performance will be achieved if the data is already known by the service, and returned immediately. Even if our ORM is very fast - thanks to its diverse cache levels we just wrote about - it may be hosted in another service, so a network delay may occur. The less communication, the better.
You may consider using TSynDictionary instances over your business objects, or your DTO objects. You may start with no cache in the business or application layers, but once some bottlenecks are identified - e.g. by carefully looking at the logs generated by the framework - see below - defining some TSynDictionary instances could help a lot. To release memory, don't forget to set up a proper TimeOutSeconds value.
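As a rough sketch only - the exact TSynDictionary constructor and method signatures should be checked in SynCommons.pas, and TDtoDynArray / ComputeDtoFromOrm are hypothetical names - such a cache of DTO values indexed by a RawUTF8 key, with a 10-minute timeout, could look like:
var DtoCache: TSynDictionary; // keys: TRawUTF8DynArray, values: TDtoDynArray

DtoCache := TSynDictionary.Create(
  TypeInfo(TRawUTF8DynArray),TypeInfo(TDtoDynArray),false,600);
(...)
if not DtoCache.FindAndCopy(aKey,dto) then
begin
  dto := ComputeDtoFromOrm(aKey); // hypothetical expensive computation
  DtoCache.Add(aKey,dto);         // keep the result for further requests
end;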
13. Server side SQL/ORM process
Adopt a mORMot
In your developer background and history, you may have been used to writing your business code as stored procedures, to be executed on the server side. In short, a stored procedure is a way of moving some data-intensive SQL process to the database side. A client will ask for some data to be retrieved or processed on the server, and all actions will be taken on the server: since no data has to be exchanged between the client and the server, such a feature is usually much faster than a pure client-sided solution.
Since mORMot is Client-Server from the ground up, it features some unique ways of improving data-intensive processing on the client or server sides, without necessarily relying on proprietary stored procedures.
This chapter is worth reading, if you start a new mORMot project, and wonder about the architecture of your upcoming applications, or if you are integrating a mORMot server into an existing application... in which you or your predecessors may have (ab)used stored procedures. It is time to sit down first, and take counsel on how your project may be optimized enough to scale and profit.
13.1. Optimize for performance
So, let's do it the mORMot's way.
As we discussed, the main point about stored procedures is performance. But they are not a magic bullet either: we have all seen slow and endless processes in stored procedures, almost killing a database server in production. Just as with regular client-side processing.
And don't be fooled by performance: make it right, then make it fast. We could make ourself a motto of this Martin Fowler's remark:
One of the first questions people consider with this kind of thing is performance. Personally I don't think performance should be the first question. My philosophy is that most of the time you should focus on writing maintainable code. Then use a profiler to identify hot spots and then replace only those hot spots with faster but less clear code. The main reason I do this is because in most systems only a very small proportion of the code is actually performance critical, and it's much easier to improve the performance of well factored maintainable code.
If you are using a mORMot server for the first time, you may be amazed by how immediate most common processes will feel. You can capitalize on the framework optimizations, which are able to unleash the computing power of your hardware, then refine your code only when performance matters.
In order to speed up our data processing, we first have to consider the classic architecture of a mORMot application - see General mORMot architecture - Client Server implementation. A mORMot client will have two means of accessing its data:
Either from CRUD / ORM methods;
Or via services, most probably interface-based services - see below.
Our optimization efforts will therefore be directed at those two levels.
13.1.1. Profiling your application
If you worry about performance, first reflex may be to enable the framework logging, even on customer sites.
The profiling abilities of the TSynLog class, used everywhere in our framework, will allow you to identify the potential bottlenecks of your client applications. See below about this feature. From our experience, we can assure you that the first time you set up mORMot advanced logging and profiling on a real application, you may find issues you would never have thought of by yourself.
Fight against any duplicated queries, unnecessary returned information (do not forget to specify the fields to be retrieved in your CreateAndFillPrepare() request), unattended requests triggered by the User Interface (typically a TEdit's OnChange event may benefit from using a TTimer before asking for auto-completion)...
Once you have done this cleaning, you may be proud of yourself, and it may be enough for your customers to enjoy your application. You deserve a coffee or a drink.
13.1.2. Client-side caching
You may have written most of your business logic on the client side, and use CRUD / ORM methods to retrieve the information from your mORMot server. This is perfectly valid, but may be slow, especially if a lot of individual requests are performed over the network, which may be with high latency (e.g. over the Internet).
A new step of optimization, once you did identify your application bottlenecks via profiling, may be to tune your ORM client cache. In fact, there are several layers of caching available in a mORMot application. See CRUD level cache for details about these features. Marking some tables as potentially cached on the client side may induce a noticeable performance boost, with no need of changing your client code.
13.1.3. Write business logic as Services
A further step of optimization may be to let the business logic be processed on the server side, within a service. In fact, you will then switch your mind from a classic Multi-tier architecture to a Service-Oriented Architecture (SOA).
As a result, your process may take much less time, and you may also be able to benefit from some other optimization tricks, like dedicated caching in your service. For instance, consider writing your service in sicShared mode instead of the default sicSingle mode - see below - and let some intangible objects be stored as implementation class fields: the next request from a client will not need to load this data from the database, but instead retrieve the information directly from memory, with no latency. You may also consider using sicClientDriven mode, and cache some client-specific information as implementation class fields.
Beside optimization, your code will probably become easier to maintain and scale, when running as services. SOA is indeed a very convenient pattern, and will induce nice side effects, like the ability to switch to multi-platform clients, including mobile or AJAX, since your business logic will stay on the application server.
13.1.4. Using ORM full power
On the server side, your business code, written using CRUD / ORM methods, could be optimized.
First of all, ORM caching may also be used. Any unneeded round-trip to the database - even more with External SQL database access - could impact your application responsiveness. Then your business logic, written as services, will benefit from it.
In fact, we found out that Array DML or optimized INSERT could be much faster than a regular stored procedure, with individual SQL statements run in a loop.
13.2. Stored procedures
13.2.1. Why to avoid stored procedures
In practice, stored procedures have some huge drawbacks:
Your business logic is tied to the data layout used for storage - and the Relational Model is far away from natural language - see below;
Debugging is somewhat difficult, since stored procedures will be executed on the database server, far away from your application;
Each developer will probably need their own database instance to be able to debug their own set of stored procedures;
Project deployment becomes complex, since you have to synchronize your application and database server;
Cursors and temporary tables, as commonly used in stored procedures, may hurt performance;
They couple you to a particular database engine: you are tied to using Java, C# or a P/SQL variant to write your business code, so switching from Oracle to PostgreSQL or MS SQL will be error prone, if not impossible;
They may consume some precious hardware resources on your database server, which may be limited (e.g. proprietary engines like Oracle or MS SQL will force you to use only one CPU or a limited amount of RAM, unless you spend a lot of money to increase your license abilities);
You will probably have limitations in the virtual environment running in your database engine: deprecated VM or libraries, restricted access to files or network due to security requirements, missing libraries;
Inefficiency of parameters passing, especially when compared with class OOP programming - you are back to the procedural mode of the 80s;
Parameters passing will probably result in sub-optimal SQL statements, handling all passed values even if not used;
Flat design of stored procedures interfaces, far away from the interface segregation principle - see below;
Letting several versions of your business logic coexist on the same server is a nightmare to maintain;
Unit testing is made difficult, since you won't be able to mock or stub - see below - your stored procedures or your data;
No popular SQL engine does allow stored procedures to be written in Delphi, so you won't be able to share code with your other projects;
If you use an ORM in your main application, you need to manually keep the table schema used in your stored procedures in sync with your object model - so you are losing most of the ORM benefits;
What if you want to switch to NoSQL storage, or a simple stand-alone version of your application?
We do not want to say, dogmatically, that stored procedures are absolute evil. Of course, you are free to use them, even with mORMot. All we wanted to point out is the fact that they are perhaps not the best fit with the design we would like to follow.
13.2.2. Stored procedures, anyway
There may be some cases where this ORM point of view may not be enough for your project. Do not worry: as usual, mORMot will allow you to do what you need.
The Server-Side services - see below and below - appear to be the most RESTful-compatible way of implementing a stored procedure mechanism in our framework; they can then be consumed from a mORMot client.
According to the current state of our framework, there are several ways of handling such a server-side SQL/ORM process:
Write your own SQL function to be used in SQLite3 WHERE statements;
Low-level dedicated Delphi stored procedures;
External databases stored procedures.
We will discuss those options. The first two will in fact implement two types of "stored procedure" at SQL level in pure Delphi code, making our SQlite3 kernel as powerful as other Client-Server RDBMS solutions. The last option may be considered, especially when moving from legacy applications which still rely on stored procedures for their business logic.
13.2.2.1. Custom SQL functions
The SQLite3 engine defines some standard SQL functions, like abs() min() max() or upper(). A complete list is available at http://www.sqlite.org/lang_corefunc.html
One of the greatest SQLite3 features is the ability to define custom SQL functions in a high-level language. In fact, its C API allows implementing new functions which may be called within a SQL query. In other database engines, such functions are usually named UDF (for User Defined Functions).
The framework already defines some custom SQL functions, which you may use on the Server side:
Soundex SoundexFR SoundexES for computing the English / French / Spanish soundex value of any text;
IntegerDynArrayContains, ByteDynArrayContains, WordDynArrayContains, CardinalDynArrayContains, Int64DynArrayContains, CurrencyDynArrayContains, RawUTF8DynArrayContainsCase, RawDynArrayContainsNoCase for direct search inside a BLOB column containing some dynamic array binary content (expecting either an INTEGER or a TEXT search value as 2nd parameter).
Those functions are not part of the SQlite3 engine, but are available inside our ORM to handle BLOBs containing dynamic array properties, as stated in Dynamic arrays from SQL code.
Since you may use such SQL functions in an UPDATE or INSERT SQL statement, you have an easy way of implementing server-side processing of complex data, as such:
UPDATE MyTable SET SomeField=0 WHERE IntegerDynArrayContains(IntArrayField,:(10):)
13.2.2.1.1. Implementing a function
Let us implement a CharIndex() SQL function, defined as such:
CharIndex ( SubText, Text [ , StartPos ] )
In here, SubText is the string of characters to look for in Text. StartPos indicates the starting index where charindex() should start looking for SubText in Text. The function shall return the position where the match occurred, or 0 when no match occurs. Characters are counted from 1, just like in the PosEx() Delphi function.
The SQL function implementation pattern itself is defined by the sqlite3.create_function_v2() API and the TSQLFunctionFunc prototype:
argc is the number of supplied parameters, which are available in argv[] array (you can call ErrorWrongNumberOfArgs(Context) in case of unexpected incoming number of parameters);
Use sqlite3.value_*(argv[*]) functions to retrieve a parameter value;
Then set the result value using sqlite3.result_*(Context,*) functions.
Here is typical implementation code of the CharIndex() SQL function, calling the expected low-level SQLite3 API (note the cdecl calling convention, since it is a SQLite3 / C callback function):
procedure InternalSQLFunctionCharIndex(Context: TSQLite3FunctionContext;
  argc: integer; var argv: TSQLite3ValueArray); cdecl;
var StartPos: integer;
begin
  case argc of
  2: StartPos := 1;
  3: begin
       StartPos := sqlite3.value_int64(argv[2]);
       if StartPos<=0 then
         StartPos := 1;
     end;
  else begin
    ErrorWrongNumberOfArgs(Context);
    exit;
  end;
  end;
  if (sqlite3.value_type(argv[0])=SQLITE_NULL) or
     (sqlite3.value_type(argv[1])=SQLITE_NULL) then
    sqlite3.result_int64(Context,0)
  else
    sqlite3.result_int64(Context,SynCommons.PosEx(
      sqlite3.value_text(argv[0]),sqlite3.value_text(argv[1]),StartPos));
end;
This code just gets the parameter values using the sqlite3.value_*() functions, then calls the PosEx() function to return the position of the supplied text, as an INTEGER, using sqlite3.result_int64().
The local StartPos variable is used to check for an optional third parameter to the SQL function, to specify the character index to start searching from.
The special case of a NULL parameter is handled by checking the incoming argument type, calling sqlite3.value_type(argv[]).
13.2.2.1.2. Registering a function
13.2.2.1.2.1. Direct low-level SQLite3 registration
Since we have an InternalSQLFunctionCharIndex() function defined, we may register it directly via the low-level sqlite3.create_function_v2() API.
The function is registered twice, one time with 2 parameters, then with 3 parameters, to add an overloaded version with the optional StartPos parameter.
Alternatively, the higher-level TSQLDataBase.RegisterSQLFunction() method may be used for the same purpose: it is also called twice, one time with 2 parameters, then with 3 parameters, to add the overloaded version with the optional StartPos parameter, as expected.
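A sketch of those two registration calls (assuming Demo is a TSQLDataBase instance - check the actual RegisterSQLFunction overloads in SynSQLite3.pas):
Demo.RegisterSQLFunction(InternalSQLFunctionCharIndex,2,'CharIndex');
Demo.RegisterSQLFunction(InternalSQLFunctionCharIndex,3,'CharIndex');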
13.2.2.1.2.3. Custom class definition
The generic function definition may be completed, in our framework, with a custom class definition, which is handy to maintain some specific context - not only relative to the current SQL function call, but global and static to the whole application process.
TSQLDataBaseSQLFunction classes hierarchy
For instance, the TSQLDataBaseSQLFunctionDynArray class implements a SQL function able to search into a BLOB-stored custom dynamic array type, using:
A fDummyDynArray TDynArray instance, which will handle the dynamic array RTTI;
A fDummyDynArrayValue pointer, to be used to store the dynamic array reference values during the search process.
Here is the corresponding class definition:
/// to be used to define custom SQL functions for dynamic arrays BLOB search
TSQLDataBaseSQLFunctionDynArray = class(TSQLDataBaseSQLFunction)
protected
  fDummyDynArray: TDynArray;
  fDummyDynArrayValue: pointer;
public
  /// initialize the corresponding SQL function
  // - if the function name is not specified, it will be retrieved from the type
  // information (e.g. TReferenceDynArray will declare 'ReferenceDynArray')
  // - the SQL function will expect two parameters: the first is the BLOB
  // field content, and the 2nd is the array element to search (set with
  // TDynArray.ElemSave() or with BinToBase64WithMagic(aDynArray.ElemSave())
  // if called via a Client and a JSON prepared parameter)
  // - you should better use the already existing faster SQL functions
  // Byte/Word/Integer/Cardinal/Int64/CurrencyDynArrayContains() if possible
  // (this implementation will allocate each dynamic array into memory before
  // comparison, and will be therefore slower than those optimized versions)
  constructor Create(aTypeInfo: pointer; aCompare: TDynArraySortCompare;
    const aFunctionName: RawUTF8=''); override;
end;
The InternalSQLFunctionDynArrayBlob function is a low-level SQlite3 engine SQL function prototype, which will retrieve a BLOB content, then un-serialize it into a dynamic array (using the fDummyDynArray.LoadFrom method), then call the standard ElemLoadFind method to search for the supplied element, as such:
(...)
with Func.fDummyDynArray do
try
  LoadFrom(DynArray); // temporary allocate all dynamic array content
  try
    if ElemLoadFind(Elem)<0 then
      DynArray := nil;
  finally
    Clear; // release temporary array content in fDummyDynArrayValue
  end;
(...)
You can define a similar class in order to implement your own custom SQL function.
Here is how a custom SQL function using this TSQLDataBaseSQLFunctionDynArray class is registered in the supplied unitary tests, to an existing database connection:
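A sketch of such a registration (matching the MyIntegerDynArrayContains name used below - the exact call in the regression tests may differ slightly):
Demo.RegisterSQLFunction(TSQLDataBaseSQLFunctionDynArray.Create(
  TypeInfo(TIntegerDynArray),SortDynArrayInteger,'MyIntegerDynArrayContains'));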
This new SQL function expects two BLOB arguments, the first being a reference to the BLOB column, and the second the searched value. The function can then be called from the ORM queries, as done in the framework regression tests.
Note that since the second parameter is expected to be a BLOB representation of the searched value, the BinToBase64WithMagic function is used to create a BLOB parameter, as expected by the ORM. Here, the element type is an integer, which is a pure binary variable (containing no reference-counted internal fields): so we use direct mapping from its binary in-memory representation; for a more complex element type, you should use the generic BinToBase64WithMagic(aDynArray.ElemSave()) expression instead, calling the TDynArray.ElemSave method.
Note that we did not use the overloaded OneFieldValues method expecting '?' bound parameters here, but we could have used it as well.
Since the MyIntegerDynArrayContains function will create a temporary dynamic array in memory from each row (stored in fDummyDynArrayValue), the dedicated IntegerDynArrayContains SQL function is faster.
13.2.2.2. Low-level SQLite3 stored procedure in Delphi
To implement a more complete request, and handle any kind of stored data in a column (for instance, some TEXT format to be parsed), a TOnSQLStoredProc event handler can be called for every row of a prepared statement, and is able to access the database request directly. This event handler can be specified to the TSQLRestServerDB.StoredProcExecute() method. Be aware that code inside this event handler should not use the ORM methods of the framework, but direct low-level SQLite3 access (to avoid re-entrance issues).
This will allow direct content modification during the SELECT statement. Be aware that, up to now, TSQLVirtualTableCursorJSON cursors - see Virtual Tables magic - are not safe to be used if the Virtual Table data is modified.
See the description of the TOnSQLStoredProc event handler and associated StoredProcExecute() method in the second part of this document.
13.2.2.3. External stored procedure
If the application relies on external databases - see External SQL database access - the external database may be located on a remote computer.
In such situation, all RESTful Server-sided solutions could produce a lot of network traffic. In fact, custom SQL functions or stored procedures both use the SQLite3 engine as root component.
In order to speed up the process, you may define some RDBMS stored procedures in the external database syntax (P/SQL, .Net, Java or whatever), then define some server-side services - see below - to launch those functions. Note that in this case, you'll lose the database independence of the framework, and most of the benefits of using an ORM/ODM - later on, switching to another database engine may become impossible. Such RDBMS stored procedures may be envisaged only during the transition phase of an existing application. BATCH sequences for adding/updating/deleting records have almost all the speed advantages of stored procedures, with the benefit of pure object-oriented code, easy to debug and maintain.
13.3. Server side Services
Adopt a mORMot
In order to follow a Service-Oriented Architecture (SOA) design, your application's business logic can be implemented in several ways using mORMot:
Via some TSQLRecord inherited classes, inserted into the database model, and accessible via some RESTful URI - this is implemented by our ORM architecture - see Client-Server process;
By some RESTful services, implemented in the Server as published methods, and consumed in the Client via native Delphi methods;
Defining some RESTful service contracts as standard Delphi interface types, and then running them seamlessly on both client and server sides.
The first is similar to RemObjects' DataAbstract product, which allows remote access to databases, over several protocols. There are some similarities with mORMot (like on-the-fly SQL translation for external databases), but also a quite different use case (RAD/components and wizards versus ORM/MVC) and implementation (mORMot takes advantage of the SQLite3 SQL core and is much more optimized for speed and scaling).
If you paid for a Delphi Architect edition, the first two items can be compared to the DataSnap Client-Server features. Since Delphi 2010, you can in fact define JSON-based RESTful services, in addition to the original DCOM/DBExpress remote data broker. It makes use of the new RTTI available since Delphi 2010, but it has some known stability and performance issues, and lacks strong security. It is also RAD/Wizard based, whereas mORMot uses a code-based approach.
The last item is purely interface-based, so matches the "designed by contract" principle - see below - as implemented by Microsoft's WCF technology - see below. We included most of the nice features made available in WCF in mORMot, in a KISS convention over configuration manner.
So mORMot is quite unique, in the fact that it features, in one unique code base, all three ways of implementing a SOA application. And it is an Open Source project, existing for years - you won't be stuck with proprietary code nor licenses. You can move your existing code base into a Domain-Driven Design, at your management's pace (and budget), without the need of upgrading to the latest version of the IDE.
14. Client-Server services via methods
Adopt a mORMot
To implement a service in the Synopse mORMot framework, the first method is to define a published method on the Server side, then use easy functions about JSON or URL-parameters to get the request encoded and decoded as expected, on the Client side.
We'll implement the same example as in the official Embarcadero docwiki page mentioned above: add two numbers. A very useful service, isn't it?
14.1. Publishing a service on the server
On the server side, we need to customize the standard TSQLRestServer class definition (more precisely a TSQLRestServerDB class which includes a SQlite3 engine, or a lighter TSQLRestServerFullMemory kind of server, which is enough for our purpose), by adding a new published method:
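A minimal sketch of such a class (the server class name is illustrative; only the published Ctxt-based method signature matters):
type
  TSQLRestServerTest = class(TSQLRestServerFullMemory)
  (...)
  published
    procedure Sum(Ctxt: TSQLRestServerURIContext);
  end;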
The method name ("Sum") will be used for the URI encoding, and will be called remotely from ModelRoot/Sum URL. The ModelRoot is the one defined in the Root parameter of the model used by the application.
This method, like all Server-side methods, MUST have the same exact parameter definition as in the TSQLRestServerCallBack prototype, i.e. only one Ctxt parameter, which refers to the whole execution context:
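The prototype is essentially as follows (quoted from memory - see the TSQLRestServerCallBack declaration in mORMot.pas for the reference definition):
type
  /// method prototype to be used for all method-based services
  TSQLRestServerCallBack = procedure(Ctxt: TSQLRestServerURIContext) of object;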
The Ctxt variable publishes some properties named InputInt[], InputDouble[], InputUTF8[] and Input[], able to retrieve directly a parameter value from its name, respectively as Integer/Int64, double, RawUTF8 or variant. The Ctxt.Input[] array property, returning variant values, has been defined as the default array property of the TSQLRestServerURIContext class, so writing Ctxt['a'] is the same as writing Ctxt.Input['a'].
Therefore, the code above using Ctxt[] or Ctxt.Input[] will introduce a conversion via a variant, which may be a bit slower, and, in case of string content, may lose some content for older non-Unicode versions of Delphi. So it is a good idea to use the exact Input*[] property corresponding to your value type. It makes even more sense when handling text, i.e. InputUTF8[] is to be used in such a case. For our floating-point computation method, we may have coded it as such:
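Here is a minimal sketch, reusing the same illustrative class as above:
procedure TSQLRestServerTest.Sum(Ctxt: TSQLRestServerURIContext);
begin
  // typed retrieval avoids the variant conversion
  Ctxt.Results([Ctxt.InputDouble['a']+Ctxt.InputDouble['b']]);
end;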
Those methods will raise an EParsingException exception if the parameter is not available in the URI. So you may want to use the InputExists[] or even InputIntOrVoid[], InputDoubleOrVoid[], InputUTF8OrVoid[], InputOrVoid[] methods, which won't raise any exception but return a void value (i.e. either 0, "" or Unassigned).
The Ctxt.Results([]) method is used to return the service value as one JSON object with one "Result" member, with default MIME-type JSON_CONTENT_TYPE.
For instance, the following request URI:
GET /root/Sum?a=3.12&b=4.2
will let our server method return the following JSON object:
{"Result":7.32}
That is, a perfectly AJAX-friendly request.
Note that all parameters are expected to be plain case-insensitive 'A'..'Z','0'..'9' characters.
An important point is to remember that the implementation of the callback method must be thread-safe - as stated by Thread-safety and Safe locks for multi-thread applications. In fact, the TSQLRestServer.URI method expects such callbacks to handle the thread-safety on their side. It's perhaps some more work to handle a critical section in the implementation, but, in practice, it's the best way to achieve performance and scalability: the resource locking can be made at the tiniest code level.
14.2. Defining the client
The client-side is implemented by calling some dedicated methods, and providing the service name ('sum') and its associated parameters:
function Sum(aClient: TSQLRestClientURI; a, b: double): double;
var err: integer;
begin
val(aClient.CallBackGetResult('sum',['a',a,'b',b]),Result,err);
end;
You could even implement this method in a dedicated client method - which makes sense:
type
TMyClient = class(TSQLHttpClient) // could be TSQLRestClientURINamedPipe
(...)
function Sum(a, b: double): double;
  (...)

function TMyClient.Sum(a, b: double): double;
var err: integer;
begin
val(CallBackGetResult('sum',['a',a,'b',b]),Result,err);
end;
This latter implementation is to be preferred in real applications.
You have to create the server instance, and the corresponding TSQLRestClientURI (or TMyClient), with the same database model, just as usual...
On the Client side, you can use the CallBackGetResult method to call the service from its name and its expected parameters, or create your own caller using the UrlEncode() function. Note that you can supply most class instances as their JSON representation, by passing TObject values in the method arguments:
function TMyClient.SumMyObject(a, b: TMyObject): double;
var err: integer;
begin
val(CallBackGetResult('summyobject',['a',a,'b',b]),Result,err);
end;
This Client-Server protocol uses JSON here, as encoded server-side via Ctxt.Results() method, but you can serve any kind of data, binary, HTML, whatever... just by overriding the content type on the server with Ctxt.Returns().
14.3. Direct parameter marshalling on server side
We have used above the Ctxt[] and Ctxt.Input*[] properties to retrieve the input parameters. This is pretty easy to use and powerful, but the supplied Ctxt gives full access to the input and output context.
Here is how we may implement the fastest possible parameters parsing - see sample Project06Server.dpr:
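The following sketch follows the spirit of that sample (it assumes the UrlDecodeNeedParameters and UrlDecodeExtended helpers from SynCommons.pas, decoding the values directly from the Ctxt.Parameters text buffer):
procedure TSQLRestServerTest.Sum(Ctxt: TSQLRestServerURIContext);
var a,b: TSynExtended;
begin
  // ensure both A and B parameters were supplied by the caller
  if UrlDecodeNeedParameters(Ctxt.Parameters,'A,B') then
  begin
    // decode each parameter from the raw URI text buffer
    while Ctxt.Parameters<>nil do
    begin
      UrlDecodeExtended(Ctxt.Parameters,'A=',a);
      UrlDecodeExtended(Ctxt.Parameters,'B=',b,@Ctxt.Parameters);
    end;
    Ctxt.Results([a+b]);
  end
  else
    Ctxt.Error('Missing Parameter');
end;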
The only non-obvious part of this code is the parameter marshalling, i.e. how the values are retrieved from the incoming Ctxt.Parameters text buffer, then converted into native local variables.
On the Server side, typical implementation steps are therefore:
Use the UrlDecodeNeedParameters function to check that all expected parameters were supplied by the caller in Ctxt.Parameters;
Implement the service (here it is just the a+b expression);
Then return the result calling Ctxt.Results() method or Ctxt.Error() in case of any error.
The powerful UrlDecodeObject function (defined in mORMot.pas) can be used to un-serialize most class instances from their textual JSON representation (TPersistent, TSQLRecord, TStringList...).
Using Ctxt.Results() will encode the specified values as a JSON object with one "Result" member, with default mime-type JSON_CONTENT_TYPE:
{"Result":"OneValue"}
or a JSON object containing an array:
{"Result":["One","two"]}
14.4. Returning non-JSON content
Using Ctxt.Returns() will let the method return the content in any format, e.g. as a JSON object (via the overloaded Ctxt.Returns([]) method expecting field name/value pairs), or any other content, since the returned MIME type can be defined as a parameter to Ctxt.Returns() - it may be useful to specify another MIME type than the default JSON_CONTENT_TYPE constant, i.e. 'application/json; charset=UTF-8', and return plain text, HTML or binary.
For instance, you can directly return a value as plain text, or any raw binary content:
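The following sketch returns the raw content of a file, with its MIME type guessed from the data (the 'c:\datafiles\' folder is purely illustrative; StringFromFile and GetMimeContentType are SynCommons.pas helpers):
procedure TSQLRestServer.GetFile(Ctxt: TSQLRestServerURIContext);
var fileName: TFileName;
    content: RawByteString;
begin
  // the local folder name is hypothetical - adapt to your own layout
  fileName := 'c:\datafiles\'+UTF8ToString(Ctxt.InputUTF8['filename']);
  content := StringFromFile(fileName);
  if content='' then
    Ctxt.Error('',HTTP_NOTFOUND) else
    // return the raw content, with a Content-Type guessed from the data
    Ctxt.Returns(RawUTF8(content),HTTP_SUCCESS,
      HEADER_CONTENT_TYPE+GetMimeContentType(pointer(content),length(content),fileName));
end;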
The corresponding client method may be defined as such:
function TMyClient.GetFile(const aFileName: RawUTF8): RawByteString;
begin
  if CallBackGet('GetFile',['filename',aFileName],RawUTF8(result))<>HTTP_SUCCESS then
    raise Exception.CreateFmt('Impossible to get file: %s',[result]);
end;
Note that the Ctxt.ReturnFile() method - see below - is preferred over the manual file retrieval implemented in this TSQLRestServer.GetFile() method. It is shown here for demonstration purposes only.
If you use HTTP as communication protocol, you can consume these services, implemented Server-Side in fast Delphi code, with any AJAX application on the client side.
Using GetMimeContentType() when sending non-JSON content (e.g. a picture, a pdf file, binary data...) will be interpreted as expected by any standard Internet browser: it could be used to serve some good old HTML content within a page, not necessarily consuming the service via JavaScript.
14.5. Advanced process on server side
On the server side, the method definition has only one Ctxt parameter, which has several members available at calling time, and publishes all service calling features and context, including RESTful URI routing, session handling or low-level HTTP headers (if any).
At first, Ctxt may indicate the expected TSQLRecord ID and TSQLRecord class, as decoded from the RESTful URI. It means that a service can be related to any table/class of our ORM framework, so you will be able to easily create any RESTful compatible request on an URI like ModelRoot/TableName/TableID/MethodName. The ID of the corresponding record is decoded from its RESTful scheme into Ctxt.TableID, and the table is available in Ctxt.Table or Ctxt.TableIndex (if you need its index in the associated server Model).
For example, here we return a BLOB field content as hexadecimal, according to its TableName/TableID:
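A sketch of such a method follows (TSQLRecordPeople is an illustrative sample table; RetrieveBlob is the standard ORM method to read a BLOB field, and SynCommons.BinToHex converts the raw bytes into hexadecimal text):
procedure TSQLRestServerTest.DataAsHex(Ctxt: TSQLRestServerURIContext);
var aData: TSQLRawBlob;
begin
  // this method is expected to be called as ModelRoot/People/{ID}/DataAsHex
  if (self=nil) or (Ctxt.Table<>TSQLRecordPeople) or (Ctxt.TableID<0) then
    Ctxt.Error('Need a valid record and its ID') else
  if RetrieveBlob(TSQLRecordPeople,Ctxt.TableID,'Data',aData) then
    Ctxt.Results([SynCommons.BinToHex(aData)]) else
    Ctxt.Error('Impossible to retrieve the Data BLOB field');
end;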
If authentication - see below - is used, the current session, user and group IDs are available in the Session / SessionUser / SessionGroup fields. If authentication is not available, those fields are meaningless: in fact, Ctxt.Session will contain either 0 (CONST_AUTHENTICATION_SESSION_NOT_STARTED) if the session has not yet been started, or 1 (CONST_AUTHENTICATION_NOT_USED) if the authentication mode is not active. Server-side implementation can use the TSQLRestServer.SessionGetUser method to retrieve the corresponding user details (note that when using this method, the returned TSQLAuthUser instance is a local thread-safe copy which shall be freed when done).
In the Ctxt.Call^ member, you can access the low-level communication content, i.e. all incoming and outgoing values, including headers and message body. Depending on the transmission protocol used, you can retrieve e.g. HTTP header information. For instance, here is how you may access the client remote IP address and application User-Agent, at the lowest level:
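A minimal sketch (assuming the FindIniNameValue helper from SynCommons.pas, and that the HTTP server appends a REMOTEIP pseudo-header to the incoming headers):
var remoteIP, userAgent: RawUTF8;
begin
  // parse the raw incoming HTTP headers stored in Ctxt.Call^.InHead
  remoteIP := FindIniNameValue(pointer(Ctxt.Call^.InHead),'REMOTEIP: ');
  userAgent := FindIniNameValue(pointer(Ctxt.Call^.InHead),'USER-AGENT: ');
  ...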
Of course, for those fields, it is much preferred to use the Ctxt.RemoteIP or Ctxt.UserAgent properties, which use an efficient cache.
14.6. Browser speed-up for unmodified requests
When used over a slow network (e.g. over the Internet), you can set the optional Handle304NotModified parameter of both Ctxt.Returns() and Ctxt.Results() methods to return the response body only if it has changed since last time.
In practice, the result content will be hashed (using the crc32c algorithm, with the fast SSE4.2 hardware instruction, if available) and, in case of no modification, a "304 Not Modified" status will be returned to the browser, without the actual result content. Therefore, the response will be transmitted and received much faster, and will save a lot of bandwidth, especially in case of periodic server polling (e.g. for client screen refresh).
Note that in case of hash collision of the crc32c algorithm (we never did see it happen, but such a mathematical possibility exists), a false positive "not modified" status may be returned; this option is therefore unset by default, and should be enabled only if your client does not handle any sensitive accounting process, for instance.
Be aware that you should disable authentication for the methods using this Handle304NotModified parameter, via a TSQLRestServer.ServiceMethodByPassAuthentication() call. In fact, our RESTful authentication - see below - uses a per-URI signature, which changes very often (to avoid man-in-the-middle attacks). Therefore, any browser-side caching benefit will be voided if authentication is used: the browser internal cache will tend to grow for nothing, since the previous URIs are deprecated, and it will be a cache miss most of the time. But when serving some static content (e.g. HTML content, fixed JSON values or even UI binaries), this browser-side caching can be very useful.
This stateless REST model will enable several levels of caching, even using an external Content Delivery Network (CDN) service. See below for some potential hosting architectures, which may let your mORMot server scale to thousands of concurrent users, served around the world with the best responsiveness.
14.7. Returning file content
The framework's HTTP server is able to return a file as the response to a method-based service. The high-performance http.sys server is even able to serve the file content asynchronously from kernel mode, with outstanding performance.
You can use the Ctxt.ReturnFile() method to return a file directly. This method is also able to guess the MIME type from the file extension, and handle HTTP_NOTMODIFIED = 304 process, if Handle304NotModified parameter is true, using the file time stamp.
Another possibility may be to use the Ctxt.ReturnFileFromFolder() method, which is able to efficiently return any file specified by its URI, from a local folder. It may be very handy to return some static web content from a mORMot HTTP server.
14.8. JSON Web Tokens (JWT)
JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA. They can be used for:
Authentication: including a JWT to any HTTP request allows Single Sign On user validation across different domains;
Secure Information Exchange: a small amount of data can be stored in the JWT payload, and is digitally signed to ensure its provenance and integrity.
See http://jwt.io for an introduction to JSON Web Tokens.
Our framework implements JWT:
"HS256/384/512" (HMAC-SHA2-256/384/512), "ES256" (256-bit ECDSA) standard algorithms, and "S3256/384/512" (for non-yet-standard SHA3-256/384/512) - with the addition of the "none" weak algo, to be used with caution;
Computes and validates all JWT claims: dates, audiences, JWT ID;
Thread-safe and high performance (2 us for a HS256 verification under x64), with optional in-memory cache if needed (e.g. for slower ES256);
Stand-alone and cross-platform code: no external dll, works with Delphi or FPC;
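For instance, issuing a token may look like the following sketch (assuming SynCrypto.pas's TJWTHS256 class; the claim enumeration names and constructor parameter order are indicative and should be checked against the unit):
var jwt: TJWTHS256;
    token: RawUTF8;
begin
  // 'sec' passphrase derived with 10 rounds of PBKDF2_HMAC_SHA256,
  // "iss" + "iat" + "exp" + "jti" claims, 60 minutes expiration delay
  jwt := TJWTHS256.Create('sec',10,
    [jrcIssuer,jrcIssuedAt,jrcExpirationTime,jrcJwtID],[],60);
  try
    token := jwt.Compute([],'myself'); // issue a token for the 'myself' issuer
  finally
    jwt.Free;
  end;
end;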
The issuer has been encoded as the expected "iss": field, "iat" and "exp" fields contain the issuing and expiration timestamps, and "jti" has been filled with an obfuscated TSynUniqueIdentifier as JWT ID. Since we used a TJWTHS256 class, an HMAC-SHA256 digital signature of the header and payload has then been appended - with a secret safely derived from the 'sec' passphrase using 10 rounds of PBKDF2_HMAC_SHA256 derivation (in practice, you may use a much higher number, like 20,000).
Then you can decode such a token, and access its payload in a single method:
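The following sketch shows such a validation from within a method-based service (TMyDaemon and the local folder are hypothetical; it assumes a fJwt: TJWTHS256 field initialized as above, and the Ctxt.AuthenticationCheck() helper mentioned below):
procedure TMyDaemon.Files(Ctxt: TSQLRestServerURIContext);
begin
  // decode and validate the JWT supplied in the incoming HTTP headers
  if Ctxt.AuthenticationCheck(fJwt) then
    // the 'c:\datafolder' location is purely illustrative
    Ctxt.ReturnFileFromFolder('c:\datafolder');
end;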
The above method defines a method-based service returning the content of a local folder, only if a valid JWT is supplied within the HTTP headers of the incoming request. If AuthenticationCheck fails to validate the token supplied in the associated Ctxt, it will return a 401 HTTP_UNAUTHORIZED status to the client, as expected.
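14.9. Handling errors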
When using Ctxt.Input*[] properties, any missing parameter will raise an EParsingException. It will therefore be intercepted by the server process (as any other exception), and returned to the client with an error message containing the Exception class name and its associated message.
But you can have full access to the error workflow, if needed. In fact, calling either Ctxt.Results(), Ctxt.Returns(), Ctxt.Success() or Ctxt.Error() will specify the HTTP status code (e.g. 200 / "OK" for Results() and Success() methods by default, or 400 / "Bad Request" for Error()) as an integer value. For instance, here is how a service not returning any content can handle those status/error codes:
procedure TSQLRestServer.Batch(Ctxt: TSQLRestServerURIContext);
begin
  if (Ctxt.Method=mPUT) and RunBatch(nil,nil,Ctxt) then
    Ctxt.Success else
    Ctxt.Error;
end;
In case of an error on the server side, you may call Ctxt.Error() method (only the two valid status codes are 200 and 201).
The Ctxt.Error() method has an optional parameter to specify a custom error message in plain English, which will be returned to the client in case of an invalid status code. If no custom text is specified, the framework will return the corresponding generic HTTP status text (e.g. "Bad Request" for default status code HTTP_BADREQUEST = 400).
In this case, the client will receive a corresponding serialized JSON error object, e.g. for Ctxt.Error('Missing Parameter',HTTP_NOTFOUND):
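The response body may look like the following (the member names reflect the framework's error serialization, given here as an indicative sample):
{
"errorCode":404,
"errorText":"Missing Parameter"
}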
If called from an AJAX client, or a browser, this content should be easy to interpret.
Note that the framework core will catch any exception raised during the method execution, and will return an "Internal Server Error" / HTTP_SERVERERROR = 500 error code, with the associated textual exception details.
14.10. Benefits and limitations of this implementation
Method-based services allow fast and direct access to all mORMot Client-Server RESTful features, over all usual protocols of our framework: HTTP/1.1, Named Pipe, Windows Messages, direct in-memory/in-process access.
The mORMot implementation of method-based services gives full access to the lowest-level of the framework core, so it has some advantages:
It can be tuned to fit any purpose (such as retrieving or returning some HTML or binary data, or modifying the HTTP headers on the fly);
It is integrated into the RESTful URI model, so it can be related to any table/class of our ORM framework (like DataAsHex service above), or it can handle any remote query (e.g. any AJAX or SOAP requests);
It has a very low performance overhead, so can be used to reduce server workload for some common tasks.
Note that due to this implementation pattern, the mORMot service implementation is very fast, and not sensitive to the "Hash collision attack" security issue, as reported with Apache - see http://blog.synopse.info/post/2011/12/30/Hash-collision-attack for details.
But with this implementation, a lot of process (e.g. parameter marshalling) is to be done by hand on both client and server side code. In addition, building and maintaining a huge SOA system with a "method by method" approach could be difficult, since it publishes one big "flat" set of services. This is where interfaces enter the scene.
15. Interfaces
15.1. Delphi and interfaces
15.1.1. Declaring an interface
No, interface(-book) is not another social network, sorry.
In the Delphi OOP model, an interface defines a type that comprises abstract virtual methods. The short, easy definition is that an interface is a declaration of functionality without an implementation of that functionality. It defines "what" is available, not "how" it is made available. This is the so-called "abstraction" benefit of interfaces (there are other benefits, like the orthogonality of interfaces to classes, but we'll see that later).
In Delphi, we can declare an interface like so:
type
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    /// add two signed 32-bit integers
    function Add(n1,n2: integer): integer;
  end;
It just sounds like a class definition, but, as you can see:
It is named ICalculator, and not TCalculator: it is a common convention to start an interface name with an I, to make a difference with a T for a class or other implementation-level type definition;
There is no visibility attribute (no private / protected / public / published keywords): in fact, it is just as if all methods were published;
There are no fields, just methods (fields are part of the implementation, not of the interface): in fact, you can have properties in your interface definition, but those properties shall redirect to existing getter and setter methods, via the read and write keywords;
There is a strange number below the interface name, called a GUID: this is a unique identifier of the interface - you can create such a genuine constant at the editor cursor position by pressing Ctrl + Shift + G in the Delphi IDE;
But the methods are just defined as usual.
15.1.2. Implementing an interface with a class
Now that we have an interface, we need to create an implementation.
Our interface is very basic, so we may implement it like this:
type
  TServiceCalculator = class(TInterfacedObject, ICalculator)
  protected
    fBulk: string;
  public
    function Add(n1,n2: integer): integer;
    procedure SetBulk(const aValue: string);
  end;

function TServiceCalculator.Add(n1, n2: integer): integer;
begin
  result := n1+n2;
end;

procedure TServiceCalculator.SetBulk(const aValue: string);
begin
  fBulk := aValue;
end;
You can note the following:
We added ICalculator name to the class() definition: this class inherits from TInterfacedObject, and implements the ICalculator interface;
Here we have protected and public keywords - but the Add method can have any visibility, from the interface point of view: it will be used as implementation of an interface, even if the method is declared as private in the implementation class;
There is a SetBulk method which is not part of the ICalculator definition - so we can add other methods to the implementation class, and we can even implement several interfaces within the same class (just add the other interface names after it, like class(TInterfacedObject, ICalculator, IAnotherInterface));
There is a fBulk protected field within this class definition, which is not part of the interface either, but can be used by the class implementation.
Here we have to code an implementation for the TServiceCalculator.Add() method (otherwise the compiler will complain about a missing method), whereas there is no implementation expected for the ICalculator.Add method - it is perfectly "abstract".
15.1.3. Using an interface
Now we have two ways of using our TServiceCalculator class:
The classic way;
The abstract way (using an interface).
The "classic" way, using an explicit class instance:
function MyAdd(a,b: integer): integer;
var Calculator: TServiceCalculator;
begin
Calculator := TServiceCalculator.Create;
try
result := Calculator.Add(a,b);
finally
Calculator.Free;
end;
end;
Note that we used a try..finally block to protect the instance memory resource.
Then we can use an interface:
function MyAdd(a,b: integer): integer;
var Calculator: ICalculator;
begin
Calculator := TServiceCalculator.Create;
result := Calculator.Add(a,b);
end;
What's up over there?
We defined the local variable as ICalculator: so it will be an interface, not a regular class instance;
We assigned a TServiceCalculator instance to this interface variable: the variable will now handle the instance life time;
We called the method just as usual - in fact, the computation is performed with the same exact expression: result := Calculator.Add(a,b);
We do not need any try...finally block here: in Delphi, interface variables are reference-counted: that is, the use of the interface is tracked by the compiler and the implementing instance, once created, is automatically freed when the compiler realizes that the number of references to a given interface variable is zero;
And the performance cost is negligible: this is more or less the same as calling a virtual method (just one more redirection level).
In fact, the compiler creates a hidden try...finally block in the MyAdd function, and the instance will be released as soon as the Calculator variable goes out of scope. The generated code could look like this:
function MyAdd(a,b: integer): integer;
var Calculator: TServiceCalculator;
begin
Calculator := TServiceCalculator.Create;
try
Calculator.FRefCount := 1;
result := Calculator.Add(a,b);
finally
dec(Calculator.FRefCount);
if Calculator.FRefCount=0 then
Calculator.Free;
end;
end;
Of course, the actual generated code is a bit more optimized than this (and thread-safe), but you get the idea.
15.1.4. There is more than one way to do it
One benefit of interfaces, which we have already mentioned, is that they are "orthogonal" to the implementation.
In fact, we can create another implementation class, and use the same interface:
type
  TOtherServiceCalculator = class(TInterfacedObject, ICalculator)
  protected
    function Add(n1,n2: integer): integer;
  end;

function TOtherServiceCalculator.Add(n1, n2: integer): integer;
begin
  result := n2+n1;
end;
Here the computation is not the same: we use n2+n1 instead of n1+n2... of course, this will result in the same value, but we can use this other implementation for the very same interface, by using its TOtherServiceCalculator class name:
function MyOtherAdd(a,b: integer): integer;
var Calculator: ICalculator;
begin
  Calculator := TOtherServiceCalculator.Create;
result := Calculator.Add(a,b);
end;
15.1.5. Here comes the magic
Now you may begin to see the point of using interfaces in a client-server framework like ours.
Our mORMot is able to use the same interface definition on both client and server side, calling all expected methods on both sides, but having all the implementation logic on the server side. The client application will transmit method calls (using JSON instead of much more complicated XML/SOAP) to the server (using a "fake" implementation class created on the fly by the framework), then the execution will take place on the server (with obvious benefits), and the result will be sent back to the client, as JSON. The same interface can be used on the server side, and in this case, execution will be in-place, so very fast.
By creating a whole bunch of interfaces for implementing the business logic of your project, you will benefit from an open and powerful implementation pattern.
More on this later on... first we'll take a look at good principles of playing with interfaces.
15.2. SOLID design principles
The acronym SOLID is derived from the following OOP principles (quoted from the corresponding Wikipedia article):
Single responsibility principle: the notion that an object should have only a single responsibility;
Open/closed principle: the notion that "software entities ... should be open for extension, but closed for modification";
Liskov substitution principle: the notion that "objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program" - also known as "design by contract";
Interface segregation principle: the notion that "many client specific interfaces are better than one general purpose interface.";
Dependency inversion principle: the notion that one should "Depend upon Abstractions. Do not depend upon concretions.". Dependency injection is one method of following this principle, which is also called Inversion Of Control (aka IoC).
If you have some programming skills, those principles are general statements you may already have found out by yourself. If you start doing serious object-oriented coding, those principles are best-practice guidelines you will definitely gain from following.
They certainly help to fight the three main code weaknesses:
Rigidity: Hard to change something because every change affects too many other parts of the system;
Fragility: When you make a change, unexpected parts of the system break;
Immobility: Hard to reuse in another application because it cannot be disentangled from the current application.
15.2.1. Single Responsibility Principle
When you define a class, it shall be designed to implement only one feature. The so-called feature can be seen as an "axis of change" or a "reason for change".
Therefore:
One class shall have only one reason that justifies changing its implementation;
Classes shall have few dependencies on other classes;
Classes shall be abstract from the particular layer they are running - see Multi-tier architecture.
For instance, a TRectangle object should not have both ComputeArea and Draw methods defined at once - they will define two responsibilities or axis of change: the first responsibility is to provide a mathematical model of a rectangle, and the second is to render it on GUI.
15.2.1.1. Splitting classes
To take an example from real coding, imagine you define a communication component. You want to communicate, say, with a bar-code scanner peripheral. You may define a single class, e.g. TBarcodeScanner, supporting such a device connected over a serial port. Later on, the manufacturer deprecates the serial port support, since almost no computer still has one, and offers only USB models in its catalog. You may inherit from TBarcodeScanner, and add USB support.
SOLID Principles - Single Responsibility: Single-to-rule-them-all class
But in practice, this new TUsbBarCodeScanner class is difficult to maintain, since it will inherit from serial-related communication. So you start splitting the class hierarchy, using an abstract parent class:
SOLID Principles - Single Responsibility: Abstract parent class
We may define some virtual abstract methods, which will be overridden in inherited classes:
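For instance (a minimal sketch; the method names are purely illustrative):
type
  TAbstractBarcodeScanner = class(TComponent)
  protected
    // transport-specific behavior, to be overridden in inherited classes
    procedure Connect; virtual; abstract;
    procedure Disconnect; virtual; abstract;
    function ReadBarcode: string; virtual; abstract;
  end;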
Then, the TSerialBarCodeScanner and TUsbBarCodeScanner classes will override those methods, according to the final implementation.
In fact, this approach is cleaner. But it is not perfect either, since it may be hard to maintain and extend. Imagine the manufacturer is using a standard protocol for communication, whatever USB or Serial connection is used. You will put this communication protocol (e.g. its state machine, its stream computation, its delaying settings) in the TAbstractBarcodeScanner class. But perhaps there will be diverse flavors, in TSerialBarCodeScanner or TUsbBarCodeScanner, or even due to diverse models and features (e.g. whether it supports 2D or 3D bar-codes).
It appears that putting everything in a single class is not a good idea. Splitting protocol and communication is to be preferred. Each "axis of change" - i.e. every aspect which may need modifications - requires its own class. Then the T*BarcodeScanner classes will compose those protocol and communication classes within a single component.
Imagine we have two identified protocols (named BCP1 and BCP2), and two means of communication (serial and USB). So we will define the following classes:
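A sketch of such a split (all class names are illustrative):
type
  // "axis of change" #1: the communication protocol
  TAbstractBarcodeProtocol = class
  end;
  TBCP1BarcodeProtocol = class(TAbstractBarcodeProtocol)
  end;
  TBCP2BarcodeProtocol = class(TAbstractBarcodeProtocol)
  end;

  // "axis of change" #2: the mean of communication
  TAbstractBarcodeConnection = class
  end;
  TSerialBarcodeConnection = class(TAbstractBarcodeConnection)
  end;
  TUsbBarcodeConnection = class(TAbstractBarcodeConnection)
  end;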
SOLID Principles - Single Responsibility: Splitting protocol and communication
Then, we may define our final classes and components as such:
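A possible composition, as a sketch (it assumes protected fProtocol / fConnection fields on the abstract parent, as detailed below):
type
  TSerialBarCodeScanner = class(TAbstractBarcodeScanner)
  public
    constructor Create(AOwner: TComponent); override;
  end;

constructor TSerialBarCodeScanner.Create(AOwner: TComponent);
begin
  inherited Create(AOwner);
  fProtocol := TBCP1BarcodeProtocol.Create;       // protocol flavor for this device...
  fConnection := TSerialBarcodeConnection.Create; // ...transported over a serial link
end;
The framework's own SynDB.pas unit follows the same kind of split: the TSQLDBConnectionProperties classes define the database connection settings, whereas: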
The actual living connection feature is handled by TSQLDBConnection classes;
And the database request feature is handled by TSQLDBStatement instances, obtained via the dedicated NewConnection / ThreadSafeConnection / NewStatement methods.
Therefore, you may change how a database connection is defined (e.g. add a property to a TSQLDBConnectionProperties child), and you won't have to change the statement implementation itself.
15.2.1.2. Do not mix UI and logic
Another practical "Single Responsibility Principle" smell may appear in your uses clause.
If your data-only or peripheral-only unit starts like this:
unit MyDataModel;

uses
  Winapi.Windows,
mORMot,
...
It will induce a dependency on the Windows Operating System, whereas your data will certainly benefit from being OS-agnostic. Today's compilers (Delphi or FPC) target several OS, so coupling our data to the actual Windows unit does show a bad design.
Similarly, you may add a dependency to the VCL, via a reference to the Forms unit. If your data-only or peripheral-only unit starts like the following, beware!
unit MyDataModel;

uses
  Winapi.Messages,
  Vcl.Forms,
mORMot,
...
If you later want to use FMX, or LCL (from Lazarus) in your application, or want to use your MyDataModel unit on a pure server application without any GUI, hosted on Windows - or even better on Linux/BSD - you are stuck.
Note that if you are used to developing in RAD mode, the units generated by the IDE wizards come with some default references in the uses clause of the generated .pas file. So take care not to introduce such coupling into your own business code!
As a general rule, our ORM/SOA framework source code tries to avoid such dependencies. All OS-specificities are centralized in our SynCommons.pas unit, and there is no dependency to the VCL when it is not mandatory, e.g. in mORMot.pas.
Following the RAD approach, you may start from your UI, i.e. defining the needed classes in the unit where your visual form (whether VCL or FMX) is defined. Don't follow this tempting, but dangerous path!
Code like the following may be accepted for a small example (e.g. the one supplied in the SQlite3\Samples sub-folder of our repository source code tree), but is to be absolutely avoided for any production ready mORMot-based application:
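A sketch of the kind of coupling to avoid (all names are illustrative; TSQLBaby mimics the framework samples):
unit Unit1; // GUI, ORM model and business logic all mixed in one unit - avoid this!

interface

uses
  Winapi.Windows, System.SysUtils, System.Classes,
  Vcl.Forms, Vcl.StdCtrls,
  SynCommons, mORMot;

type
  TSQLBaby = class(TSQLRecord) // ORM class defined right next to the form class
  private
    fName: RawUTF8;
  published
    property Name: RawUTF8 read fName write fName;
  end;

  TForm1 = class(TForm)
    AddButton: TButton;
  end;

implementation

{$R *.dfm}

end.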
In your actual project units, when you define an ORM or SOA class, never include GUI methods within it. Indeed, the fact that our TSQLRecord class definitions are common to both Client and Server sides makes this principle mandatory. You should not have any GUI related method on the Server side, and the Client side could use the object instances with several GUI implementations (Delphi Client, AJAX Client...).
Therefore, if you want to change the GUI, you won't have to recompile the TSQLRecord class and the associated database model. If you want to deploy your server on a Linux box (using e.g. CrossKylix or FPC as compiler), you could reuse your very same code, since you do not have reference to the VCL in your business code.
This Single Responsibility principle may sound simple and easy to follow (even obvious), but in fact, it is one of the hardest principles to get right. Naturally, we tend to join responsibilities in our class definitions. Our framework architecture will help you follow this principle, by its Client-Server nature and all its high-level methods involving interfaces, but it is always up to the end coder to design his/her types properly.
15.2.2. Open/Closed Principle
When you define a class or a unit, at the same time:
They shall be open for extension;
But closed for modification.
It means that you may be able to extend your existing code, without breaking its initial behavior. Some other guidelines may be added, but you get the main idea.
Conformance to this open/closed principle is what yields the greatest benefit of OOP, i.e.:
Code re-usability;
Code maintainability;
Code extendibility.
Following this principle will take your code far away from a regular RAD style. But the benefits will be huge.
15.2.2.1. Applied to our framework units
When designing our ORM/SOA set of units, we tried to follow this principle. In fact, you should not have to modify its implementation. You should define your own units and classes, without the need to hack the framework source code.
Even if Open Source paradigm allows you to modify the supplied code, this shall not be done unless you are either fixing a bug or adding a new common feature. This is in fact the purpose of our https://synopse.info web site, and most of the framework enhancements have come from user requests.
The framework Open Source license - see below - may encourage user contributions in order to fulfill the Open/closed design principle:
Your application code extends the Synopse mORMot Framework by defining your own classes or event handlers - this is how it is open for extension;
The main framework units shall remain inviolate, and common to all users - this illustrates the closed for modification design.
As a beneficial side effect, this principle will ensure that your code will be ready to follow the framework updates (which are quite regular). When a new version of mORMot is available, you should be able to retrieve it for free from our web site, replace your files locally, then build a new enhanced version of your application, with the benefit of all included fixes and optimizations. Even the source code repository is available - at https://synopse.info/fossil or from https://github.com/synopse/mORMot - and allows you to follow the current state of evolution of the framework.
In short, abstraction is the key to peace of mind. All your code shall not depend on a particular implementation.
15.2.2.2. Open/Closed in practice
In order to implement this principle, several conventions could be envisaged:
You shall better define some abstract classes, then use specific overridden classes for each and every implementation: this is for instance how Client-Server classes were implemented - see Client-Server process;
All object members shall be declared private or protected - this is a good reason to use a Service-Oriented Architecture (SOA) for defining server-side process, and/or to make the TSQLRecord published properties read-only and use some client-side constructor with parameters;
No singleton nor global variable - ever;
RTTI is dangerous - that is, let our framework use RTTI functions for its own cooking, but do not use it in your code.
In our previous bar-code scanner class hierarchy, we will therefore define the protocol and connection instances as read-only properties:
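For instance (a sketch, consistent with the illustrative classes above):
type
  TAbstractBarcodeScanner = class(TComponent)
  protected
    fProtocol: TAbstractBarcodeProtocol;
    fConnection: TAbstractBarcodeConnection;
  public
    // read-only access: no setter is published to user code
    property Protocol: TAbstractBarcodeProtocol read fProtocol;
    property Connection: TAbstractBarcodeConnection read fConnection;
  end;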
In this code, the actual variables are stored as protected fields, with only getters (i.e. read) in the public section. There is no setter (i.e. write) attribute, which would allow changing the fProtocol/fConnection instances from user code. You can still access those fields (it is mandatory in your inherited constructors), but user code should not use them.
As stated above - see SOLID Principles - Single Responsibility: Splitting protocol and communication - having dedicated classes for defining protocol and connection will also help implementing the open/closed principle. You will be able to define a new class, combining its own protocol and connection class instances, so it will be Open for extension. But you will not change the behavior of a class by inheriting from it: since protocol and connection are uncoupled, and used via composition in a dedicated class, it will be Closed for modification.
Using the newer sealed directive for a class may help ensure that your class definition will follow this principle: if the class, method or property is sealed, you will not be able to change its behavior in inherited types, even if you are tempted to.
15.2.2.3. No Singleton nor global variables
Regarding the singleton pattern, you should always avoid it in your code. In fact, a singleton was a C++ (and Java) hack invented to implement some kind of global variables, hidden behind a static class definition. They were historically introduced to support mixed modes of application-wide initialization (mainly to allocate the stdio objects needed to manage the console), and were then abused in business logic.
Once you use a singleton, or a global variable, you will miss most of the benefits of OOP. A typical use of a singleton is to register some class instances globally for the application. You may see some frameworks - or some parts of the RTL - which allow such global registration. But it will eventually void most benefits of proper dependency injection - see below - since you will not be able to have diverse resolutions of the same class.
For instance, if your database properties, or your application configuration, are stored within a singleton, or a global variable, you will certainly not be able to use several databases at once, or convert your single-user GUI application into a modern multi-user AJAX application:
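A hypothetical illustration of the kind of global state to avoid (TSQLDBConnectionProperties comes from SynDB.pas; the variable name is invented):
var
  // one application-wide connection settings instance: a singleton in disguise,
  // tying the whole process to a single database configuration
  GlobalDBProps: TSQLDBConnectionProperties;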
Such global variables are a smell of a broken Open/Closed Principle, since your project definitively won't be open for extension. Using a static class variable (as allowed in newer versions of Delphi) is just another way of defining a global variable, merely adding the named scope of the class type.
Even if you do not define some global variable in your code, you may couple your code to an existing global variable. For instance, defining some fields in your TMainForm = class(TForm) class created by the IDE, then using its global MainForm: TMainForm variable, or the Application.MainForm property, in your business code. You will start to feel something is not right, when the unit where your TMainForm is defined starts to appear in your business code uses clause... just another global variable in disguise!
In our framework, we tried to never use global registration, except for the cases where it has been found safe to be implemented, e.g. when RTTI is cached, or JSON serialization is customized for a given type. All this information is orthogonal to the classes using it, so you may find some global variables in the framework units, only when it is worth it. For instance, we split TSQLRecord's information into a TSQLRecordProperties for the shared intangible RTTI values, and TSQLModelRecordProperties instances, one per TSQLModel, for all the TSQLModel/TSQLRest specific settings - see Several Models.
15.2.3. Liskov Substitution Principle
Even if her name is barely memorable, Barbara Liskov is a great computer scientist we should better learn from. It is worth taking a look at her presentation at https://www.youtube.com/watch?v=GDVAHA0oyJU
The "Liskov substitution principle" states that, if TChild is a subtype of TParent, then objects of type TParent may be replaced with objects of type TChild (i.e., objects of type TChild may be substitutes for objects of type TParent) without altering any of the desirable properties of that program (correctness, task performed, etc.).
The example given by Barbara Liskov was about stacks and queues: even if both do share Push and Pop methods, they should not inherit from a single parent type, since the storage behavior of a stack is quite the contrary of a queue. In your program, if you start to replace a stack by a queue, you will meet strange behaviors, for sure. According to proper top-bottom design flow, both types should be uncoupled. You may implement a TFastStack class using an in-memory list for storage, or another TPersistedStack class using a remote SQL engine, but both will have to behave like a TStack, i.e. according to the last-in first-out (LIFO) principle. On the other hand, any class implementing a queue type should follow the first-in first-out (FIFO) order, whatever kind of storage is used.
In practical Delphi code, relying on abstractions may be implemented by two means:
Using only abstract parent class variables when consuming objects;
Using interface variable instead of class implementations.
Here, we do not use inheritance for sharing implementation code, but for defining an expected behavior. Sometimes, you may break the Liskov Substitution principle in implementation methods which will be coded just to gather some reusable pieces of code (the inheritance for implementation pattern), preparing some behavior which may be used only by some of the subtypes. Such "internal" virtual methods of a subtype may change the behavior of its inherited method, for the sake of efficiency and maintainability. But with this kind of implementation inheritance, which is closer to plumbing than designing, methods should be declared as protected, and not published as part of the type definition. By the way, this is exactly what interface type definitions have to offer. You can inherit from another interface, and this kind of polymorphism should strictly follow the Liskov Substitution principle. Whereas the class types, implementing the interfaces, may use some protected methods which may break the principle, for the sake of code efficiency.
In order to fulfill this principle, you should:
Properly name (and comment) your class or interface definition: having Push and Pop methods may not be enough to define a contract, so in this case type inheritance will define the expected behavior - as a consequence, you should better stay away from "duck typing" patterns, and dynamic languages, and rely on strong typing;
Use the "behavior" design pattern, when defining your objects hierarchy - for instance, if a square may be a rectangle, a TSquare object is definitively not a TRectangle object, since the behavior of a TSquare object is not consistent with the behavior of a TRectangle object (square width always equals its height, whereas it is not the case for most rectangles);
Write your tests using abstract local variables (and this will allow test code reuse for all children classes);
Follow the concept of Design by Contract, i.e. the Meyer's rule defined as "when redefining a routine [in a derivative], you may only replace its precondition by a weaker one, and its postcondition by a stronger one" - use of preconditions and postconditions also enforce testing model;
Separate your classes hierarchy: typically, you may consider using separated object types for implementing persistence and object creation (this is the common separation between Factory and Repository patterns).
You may even find in the dddInfraSettings.pas unit a powerful TRestSettings.NewRestInstance() method which is able to instantiate the needed TSQLRest inherited class from a set of JSON settings, i.e. either a TSQLHttpClient, or a local TSQLRestServerFullMemory, or a TSQLRestServerDB - the latter either with a local SQlite3 database, an external SQL engine, or an external NoSQL/MongoDB database.
Your code shall refer to abstractions, not to implementations. By using only methods and properties available at classes parent level, your code won't need to change because of a specific implementation.
15.2.3.2. I'm your father, Luke
You should note that, in the Liskov substitution principle definition, "parent" and "child" are not absolute. Which actual class is considered as "parent" may depend on the context of use.
Most of the time, the parent may be the highest class in the hierarchy. For instance, in the context of a GUI application, you may use the most abstract class to access the application data, may it be stored locally, or remotely accessed over HTTP.
But when you initialize the class instance of a locally stored server, you may need to set up the actual data storage, e.g. the file name or the remote SQL/NoSQL settings. In this context, you will need to access the "child" properties, regardless of the abstract "parent" use which will take place later on in the GUI part of the application.
Furthermore, in the context of data replication, server side and client side will have different behavior. In fact, they may be used as master or slave database, so in this case, you may explicitly define server or client classes in your code. This is what our ORM does for its master/slave replication - see Master/slave replication.
If we come back to our bar-code scanner sample, most of your GUI code may rely on TAbstractBarcodeScanner components. But in the context of the application options, you may define the internal properties of each "child" class - e.g. the serial or USB port name - so in this case, your new "parent" class may be either TSerialBarCodeScanner or TUsbBarCodeScanner, or even better the TSerialBarcodeConnection or TUsbBarcodeConnection properties, to fulfill the Single Responsibility principle.
15.2.3.3. Don't check the type at runtime
Some patterns shall never appear in your code. Otherwise, code refactoring should be done as soon as possible, to let your project be maintainable in the future.
Statements like the following are to be avoided, in either the parent's or the children's methods:
procedure TAbstractBarcodeScanner.SomeMethod;
beginif self is TSerialBarcodeScanner thenbegin
....
endelseif self is TUsbBarcodeScanner then
...
Or, in its disguised variation, using an enumerated item:
case fProtocol.MeanOfCommunication of
meanSerial: begin
...
end;
meanUsb:
...
This latter piece of code does not check self, but the fProtocol protected field. So even if you try to implement the Single Responsibility principle, you may still be able to break Liskov Substitution!
Note that both patterns will eventually break the Single Responsibility principle: each behavior shall be defined in its own child class methods. The Open/Closed principle will also be broken, since the class won't be open for extension without touching the parent class, and modifying the nested if self is T* then ... or case fProtocol.* of ... expressions.
15.2.3.4. Partially abstract classes
Another code smell may appear when you define a method which will stay abstract for some children instantiated in the project. It implies that some of the parent class behavior is not implemented at this particular hierarchy level. So you will not be able to use all the parent's methods, as would be expected by the Liskov Substitution principle. Note that the compiler will complain about it, hinting that you are creating a class with abstract methods. Never ignore such hints - which may better be handled as errors at compilation time. The (in)famous "Abstract Error" dialog, which may appear at runtime, will reflect this bad code implementation. When it occurs on a server application without GUI... you get a picture of the terror, I guess...
A more subtle violation of Liskov may appear if you break the expectation of the parent class. The following code, which emulates a bar-code reader peripheral by sending the frame by email for debugging purpose (why not?), clearly fails the Design by Contract approach:
TEMailEmulatedBarcodeProtocol = class(TAbstractBarcodeProtocol)
protected
  function ReadFrame: TProtocolFrame; override;
  procedure WriteFrame(const aFrame: TProtocolFrame); override;
  ...

function TEMailEmulatedBarcodeProtocol.ReadFrame: TProtocolFrame;
begin
  raise EBarcodeException.CreateUTF8('%.ReadFrame is not implemented!',[self]);
end;

procedure TEMailEmulatedBarcodeProtocol.WriteFrame(const aFrame: TProtocolFrame);
begin
  SendEmail(fEmailNotificationAddress,aFrame.AsString);
end;
We expected this class to fully implement the TAbstractBarcodeProtocol contract, whereas calling TEMailEmulatedBarcodeProtocol.ReadFrame will not be able to read any data frame, but will raise an exception. So we cannot use this TEMailEmulatedBarcodeProtocol class as a replacement for any other TAbstractBarcodeProtocol class, otherwise it will fail at runtime. A correct implementation may perhaps be to define a TFakeBarcodeProtocol class, implementing all the parent methods via a set of events or some text-based scenario, so that it will behave just like a correct TAbstractBarcodeProtocol class, to the full extent of its expectations.
15.2.3.5. Messing units dependencies
Last but not least, if you need to explicitly add the child class units to the parent class unit uses clause, it looks like you just broke the Liskov Substitution principle.
unit AbstractBarcodeScanner;

uses
  SysUtils,
  Classes,
  SerialBarcodeScanner, // Barbara complains: "it smells"!
  UsbBarcodeScanner;    // Barbara complains: "it smells"!
...
If your code is like this, you will have to remove the reference to the inherited classes, for sure.
Even a dependency to one of the low-level implementation detail is to be avoided:
unit AbstractBarcodeScanner;

uses
  Windows,
SysUtils,
Classes,
ComPort;
...
Your abstract parent class should not be coupled to a particular Operating System, or a mean of communication, which may not be needed. Why would you add a dependency on the raw RS-232 communication protocol, which is very likely to be deprecated?
One way of getting rid of this dependency is to define some abstract types (e.g. enumerations or simple structures like record), which will then be translated into the final types as expected by the ComPort.pas or Windows.pas units. Consider putting all the child classes dependencies at constructor level, and/or use class composition via the Single Responsibility principle so that the parent class definition will not be polluted by implementation details of its children.
You may also use a registration list, maintained by the parent unit, which is able to register the classes implementing a particular behavior at runtime. Thanks to Liskov, you will be able to substitute any parent class with any of its inherited implementations, so defining the types at runtime only should not be an issue.
15.2.3.6. Practical advantages
The main advantages of this coding pattern are the following:
Thanks to this principle, you will be able to stub or mock an interface or a class - see below - e.g. uncouple your object persistence to the actual database it runs on: this principle is therefore mandatory for implementing unitary testing to your project;
Furthermore, testing will be available not only at isolation level (testing each child class), but also at abstracted level, i.e. from the client point of view - you can have implementations which behave correctly when tested individually, but which fail when tested at a higher level, if the Liskov principle was broken;
As we have seen, if this principle is violated, the other principles are very likely to be also broken - e.g. the parent class will need to be modified whenever a new derivative of the base class is defined (violation of the Open/Closed principle), or your class types may implement more than one behavior at a time (violation of the Single Responsibility principle);
Code re-usability is enhanced by method re-usability: a method defined at a parent level does not require to be implemented for each child.
The SOA and ORM concepts, as implemented by our framework, try to be compliant with the Liskov substitution principle. It is true at class level for the ORM, but a more direct Design by Contract implementation pattern is also available, since the whole SOA stack involves a wider usage of interfaces in your projects.
15.2.4. Interface Segregation Principle
This principle states that once an interface has become too 'fat' it shall be split into smaller and more specific interfaces so that any clients of the interface will only know about the methods that pertain to them. In a nutshell, no client should be forced to depend on methods it does not use.
As a result, it will help a system stay decoupled and thus easier to re-factor, change, and redeploy.
15.2.4.1. Consequence of the other principles
Interface segregation should first appear at class level. Following the Single Responsibility principle, you are very likely to define several smaller classes, with a small extent of methods. Then use dedicated types of class, relying on composition to expose its own higher level set of methods.
The bar-code class hierarchy illustrates this concept. Each T*BarcodeProtocol and T*BarcodeConnection class will have its own set of methods, dedicated either to protocol handling, or data transmission. Then the T*BarCodeScanner classes will compose those smaller classes into a new class, with a single event handler:
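For instance (a sketch; the event signature is illustrative):
type
  TOnBarcodeScanned = procedure(Sender: TObject; const Barcode: string) of object;

  TSerialBarCodeScanner = class(TAbstractBarcodeScanner)
  protected
    fOnBarcodeScanned: TOnBarcodeScanned;
  published
    // the only entry point the final application needs to know about
    property OnBarcodeScanned: TOnBarcodeScanned
      read fOnBarcodeScanned write fOnBarcodeScanned;
  end;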
This single OnBarcodeScanned event will be the published property of the component. Both protocol and connection details will be hidden within the internal classes. The final application will use this event, and react as expected, without actually knowing anything about the implementation details.
15.2.4.2. Using interfaces
The SOA part of the framework allows direct use of interface types to implement services. This great Client-Server SOA implementation pattern - see Server side Services - helps decoupling all services into individual small methods. In this case also, the stateless design used will reduce the need for 'fat' session-related processes: an object life time can be safely driven by the interface scope.
By defining Delphi interface types instead of plain classes, it helps creating small and business-specific contracts, which can be executed on both client and server side, with the same exact code.
Since the framework makes interface consumption and publication very easy, you won't be afraid of exposing your implementation classes as small pertinent interfaces. For instance, if you want to publish a third-party API, you may consider publishing dedicated interfaces, each matching a particular API consumer's expectations. So your main implementation logic won't be polluted by how the API is consumed, and, conversely, the published API may be closer to each particular client's needs, without being polluted by the other clients' needs. DDD will definitely benefit from Interface Segregation, since this principle is the golden path to avoid domain leaking - see below.
15.2.5. Dependency Inversion Principle
Another form of decoupling is to invert the dependency between high and low level of a software design:
High-level modules should not depend on low-level modules. Both should depend on abstractions;
Abstractions should not depend upon details. Details should depend upon abstractions.
The goal of the dependency inversion principle is to decouple high-level components from low-level components such that reuse with different low-level component implementations becomes possible. A simple implementation pattern could be to use only interfaces owned by, and existing only with the high-level component package.
This principle results in Inversion Of Control (aka IoC): since you rely on abstractions, and try not to depend upon concretions (i.e. on implementation details), you should first be concerned with defining your interfaces.
15.2.5.1. Upside Down Development
In conventional application architecture, lower-level components are designed to be consumed by higher-level components which enable increasingly complex systems to be built. This design limits the reuse opportunities of the higher-level components, and certainly breaks the Liskov substitution principle.
For our bar-code reader sample, we may be tempted to start from the final TSerialBarcodeScanner we need in our application. We were asked by our project leader to allow bar-code scanning in our flagship application, and the extent of the development has been reduced to supporting a single model of device, in RS-232 mode - this may be the device already owned by our end customer.
This particular customer may have found some RS-232 bar-code relics from the 90s in its closets, but, as an experienced programmer, you know that the next step will be to support USB, in a very near future. All this bar-code reading stuff will be marketed by your company, so it is very likely that another customer will very soon ask for using its own brand new bar-code scanners... which will support only USB.
When you eventually add USB support, the UI part of the application won't have to be touched. Just implement your new inherited class, leveraging all previous coding. Following Dependency Inversion from the beginning will definitely save you time. Even in an Agile kind of process - where "Responding to change" is most valuable - the small amount of initial work spent on implementing from the abstraction will be very beneficial.
In fact, this Dependency Inversion principle is a prerequisite for proper Test-Driven Design. Following this TDD pattern, you first write your test, then fail your test, then write the implementation. In order to write the test, you need the abstracted interface of the feature to be available. So you will start from the abstraction, then write the concretion.
15.2.5.2. Injection patterns
In other languages (like Java or .Net), various patterns such as Plug-in, Service Locator, or Dependency Injection are then employed to facilitate the run-time provisioning of the chosen low-level component implementation to the high-level component.
Our Client-Server architecture facilitates this decoupling pattern for its ORM part, and allows the use of native Delphi interface to call services from an abstract factory, for its SOA part.
15.3. Circular reference and (zeroing) weak pointers
15.3.1. Weak pointers
The memory allocation model of the Delphi interface type uses some kind of Automatic Reference Counting (ARC). In order to avoid memory and resource leaks and potential random errors in the applications (aka the terrible EAccessViolation exception on customer side) when using Interfaces, a SOA framework like mORMot has to offer so-called Weak pointers and Zeroing Weak pointers features.
By default in Delphi, all references are defined:
as weak references for pointer and class instances;
with explicit copy for low-level value types like integer, Int64, currency, double or record (and old deprecated object or shortstring);
via copy-on-write with reference counting for high-level value types (e.g. string, widestring, variant or a dynamic array - with the exception of tuned memory handling for TDocVariant custom variant type);
as strong reference with reference counting for interface instances.
The main issue with strong reference counting is the potential circular reference problem. This occurs when an interface has a strong pointer to another, but the target interface has a strong pointer back to the original. Even when all other references are removed, they still will hold on to one another and won't be released. This can also happen indirectly, by a chain of objects that might have the last one in the chain referring back to an earlier object.
See the following interface definition for instance:
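A minimal sketch of such mutually-referencing interfaces, matching the setters and the HasChild method used in the code below, could be:
type
  IChild = interface; // forward declaration

  IParent = interface
    procedure SetChild(const Value: IChild);
    function HasChild: boolean;
  end;

  IChild = interface
    procedure SetParent(const Value: IParent);
  end;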
The following implementation will definitely leak memory:
procedure TParent.SetChild(const Value: IChild);
begin
FChild := Value;
end;
procedure TChild.SetParent(const Value: IParent);
begin
FParent := Value;
end;
In Delphi, most common kind of reference-copy variables (i.e. variant, dynamic array or string) solve this issue by implementing copy-on-write. Unfortunately, this pattern is not applicable to interface, which are not value objects, but reference objects, tied to an implementation class, which can't be copied.
One common solution is to use Weak pointers, by which the interface is assigned to a property without incrementing the reference count.
Note that garbage collector based languages (like Java or C#) do not suffer from this problem, since circular references are handled by their memory model: objects' lifetimes are maintained globally by the memory manager. Of course, it will increase memory use, slow down the process due to additional actions during allocation and assignments (all objects and their references have to be maintained in internal lists), and may slow down the application when the garbage collector enters into action. In order to avoid such issues when performance matters, experts tend to pre-allocate and re-use objects: this is one common limitation of this memory model, and why Delphi is still a good candidate (like unmanaged C or C++ - and also Objective-C) when dealing with performance and stability. In some cases (e.g. when using an object cache), such languages have to introduce some kind of "weak pointers", to allow some referenced objects to be reclaimed by garbage collection: but it is a different mechanism, under the same naming.
15.3.2. Handling weak pointers
In order to easily create a weak pointer, the following function was added to mORMot.pas:
procedure SetWeak(aInterfaceField: PIInterface; const aValue: IInterface);
begin
PPointer(aInterfaceField)^ := Pointer(aValue);
end;
It will assign the interface reference to a field by assigning the pointer of this instance to the internal field. It will by-pass the reference counting, so memory won't be leaked any more.
But there are still some cases where it is not enough. Under normal circumstances, a class instance should not be deallocated if there are still outstanding references to it. But since weak references don't contribute to an interface reference count, a class instance can be released when there are outstanding weak references to it. Some memory leak or even random access violations could occur. A debugging nightmare...
In order to solve this issue, ARC's Zeroing Weak pointers come to mind. It means that weak references will be set to nil when the object they reference is released. When this happens, the automatic zeroing of the outstanding weak references prevents them from becoming dangling pointers. And voilà! No access violation any more!
Such a Zeroing ARC model has been implemented in Objective-C by Apple, starting with Mac OS X 10.7 Lion, in replacement of (and/or in addition to) the previous manual memory handling implementation pattern: in its Apple flavor, ARC is available not only for interfaces, but for objects, and is certainly more sophisticated than the basic implementation available in the Delphi compiler: it is told (at least from the marketing paper point of view) to use some deep knowledge of the software architecture to provide accurate access to all instances - whereas the Delphi compiler just relies on an out-of-scope pattern. Compared to a classic garbage collector memory model, ARC is told to be much more efficient, due to its deterministic nature: Apple's experts ensure that it does make a difference, in terms of memory use and program latency - both of which are very sensitive on "modest" mobile devices. In short, thanks to ARC, your phone UI won't glitch during background garbage recycling. So mORMot will try to offer a similar feature, even if the Delphi compiler does not implement it (yet).
In order to easily create a so-called zeroing weak pointer, the following function was defined in mORMot.pas:
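Its declaration should be close to the following sketch - check mORMot.pas for the exact signature:
procedure SetWeakZero(aObject: TObject; aObjectInterfaceField: PIInterface;
  const aValue: IInterface);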
We also defined a class helper around the TObject class, to avoid the need of supplying the self parameter, but unfortunately, the class helper implementation is so buggy that it won't even compile before the Delphi XE version of the compiler. But it will allow writing code as such:
procedure TParent.SetChild(const Value: IChild);
begin
SetWeak0(@FChild,Value);
end;
For instance, the following code is supplied in the regression tests, and will ensure that weak pointers are effectively zeroed when SetWeakZero() is used:
function TParent.HasChild: boolean;
begin
result := FChild<>nil;
end;
Child := nil; // here Child is destroyed
Check(Parent.HasChild=(aWeakRef=weakref),'ZEROed Weak');
Here, aWeakRef=weakref is true when SetWeak() has been called, and equals false when SetWeakZero() has been used to assign the Child element to its Parent interface.
The SetWeak() function itself is very simple. The Delphi RTL/VCL itself uses similar code when necessary.
But the SetWeakZero() function has a much more complex implementation, due to the fact that a list of all weak references has to be maintained per class instance, and set to nil when this referring instance is released.
The mORMot implementation tries to implement:
Best performance possible when processing the Zeroing feature;
No performance penalty for other classes not involved within weak references;
Low memory use, and good scalability when references begin to define huge graphs;
Thread safety - which is mandatory at least on the server side of our framework;
Compatible with Delphi 6 and later (avoiding syntax tricks like generics).
Some good existing implementations can be found on the Internet:
Andreas Hausladen provided a classical and complete implementation at http://andy.jgknet.de/blog/2009/06/weak-interface-references - it uses some nice tricks (like a per-instance optional speed up using a void IWeakInterface interface whose VMT slot will refer to the references list), is thread-safe and is compatible with most Delphi versions - but it will slow down all TObject.FreeInstance calls (i.e. within Free / Destroy) and won't allow any overridden FreeInstance method implementation;
The implementation included within mORMot uses several genuine patterns, when compared to existing solutions:
It will hack the TObject.FreeInstance at the class VMT level, so will only slow down the exact class which is used as a weak reference, and not others (also its inherited classes won't be overridden) - and it will allow custom override of the virtual FreeInstance method;
It makes use of our TDynArrayHashed wrapper to provide a very fast lookup of instances and references, without using generic definitions - hashing will start when it is worth it, i.e. for any list storing more than 32 items;
The unused vmtAutoTable VMT slot is used to handle the class-specific orientation of this feature (similar to TSQLRecordProperties lookup as implemented for DI # 2.1.3), for best speed and memory use.
See the TSetWeakZeroClass and TSetWeakZeroInstance implementation in mORMot.pas for the details.
15.4. Interfaces in practice: dependency injection, stubs and mocks
In order to fulfill the SOLID design principles, two features are to be available when handling interfaces:
Dependency injection of the interface implementation instances (also known as Inversion of Control);
Stubbing and mocking of interfaces for proper testing.
We will show now how mORMot provides all needed features for such patterns, testing a simple "forgot my password" scenario: a password shall be computed for a given user name, then transmitted via SMS, and its record shall be updated in the database.
15.4.1. Dependency Injection at constructors
A direct implementation of dependency injection at a class level can be implemented in Delphi as such:
All external dependencies shall be defined as abstract interfaces;
An external factory could be used to retrieve an interface instance, or the class constructor shall receive its dependencies as parameters.
Using an external factory can be made within mORMot via TServiceFactory - see below. Automated dependency injection is also available via a set of classes, uncoupled from the SOA features of the framework, mainly TInjectableObject and TInterfaceResolver types, and their inherited classes - see below.
Here, we will use the more direct constructor-based pattern for a simple "forgot my password" scenario.
The dependencies are defined with the following two interfaces (only the needed methods are listed here; a real interface may have more members, but not too many, to follow the Interface Segregation SOLID principle):
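The following is a minimal sketch of those types, limited to the members used in this sample - the GUIDs are not reproduced here, and the TUser fields are deduced from the code below:
type
  TUser = record
    Name: RawUTF8;
    Password: RawUTF8;
    MobilePhoneNumber: RawUTF8;
  end;

  IUserRepository = interface(IInvokable)
    ['{...}'] // a GUID shall be generated here (Ctrl+Shift+G in the IDE)
    function GetUserByName(const Name: RawUTF8): TUser;
    procedure Save(const User: TUser);
  end;

  ISmsSender = interface(IInvokable)
    ['{...}'] // a GUID shall be generated here
    function Send(const Text, Number: RawUTF8): boolean;
  end;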
Here, we won't use TSQLRecord nor any other classes, just plain records, which will be used as a neutral means of transmission. The difference between Data Transfer Objects and business objects or Data Access Objects (DAO) like our TSQLRecord is that a DTO does not have any behavior except for storage and retrieval of its own data. It can also be independent of the persistence layer, as implemented underneath our business domain. Using a record in Delphi ensures it won't be part of a complex business logic, but will remain used as a value object.
Now, let's come back to our TLoginController class. Here is the method we want to test:
procedure TLoginController.ForgotMyPassword(const UserName: RawUTF8);
var U: TUser;
begin
U := fUserRepository.GetUserByName(UserName);
U.Password := Int32ToUtf8(Random(MaxInt));
if fSmsSender.Send('Your new password is '+U.Password,U.MobilePhoneNumber) then
fUserRepository.Save(U);
end;
It will retrieve a TUser instance from its repository, then compute a new password, and send it via SMS to the user's mobile phone. On success, it is supposed to persist (save) the new user information to the database.
15.4.2. Why use fake / emulated interfaces?
Using the real implementation of IUserRepository would expect a true database to be available, with some potential issues on existing data. Similarly, the class implementing ISmsSender in the final project should better not be called during the test phase, since sending an SMS does cost money, and we would need a true mobile phone or Internet gateway to send the password.
For our testing purpose, we only want to ensure that when the "forgot my password" scenario is executed, the user record modification is persisted to the database.
One possibility could be to define two new dedicated classes, implementing both IUserRepository and ISmsSender interfaces. But it will obviously be time consuming and error-prone. This may be a typical case where writing the test could be more complex than writing the method to be tested.
In order to maximize your ROI, and allow you to focus on your business logic, the mORMot framework proposes a simple and efficient way of creating "fake" implementations of any interface, just by defining the minimum behavior needed to run the test.
15.4.2.1. Stubs and mocks
In the book "The Art of Unit Testing" (Osherove, Roy - 2009), a distinction is drawn between stub and mock objects:
Stubs are the simpler of the two families of fake objects, simply implementing the same interface as the object that they represent and returning pre-arranged responses. Thus a fake object merely provides a set of method stubs. Therefore the name. In mORMot, it is created via the TInterfaceStub generator;
Mocks are described as a fake object that helps decide if a test failed or passed, by verifying if an interaction on an object occurred or not. Everything else is defined as a stub. In mORMot, it is created via the TInterfaceMock generator, which will link the fake object to an existing TSynTestCase instance - see below.
In practice, there should be only one mock per test, with as many stubs as necessary to let the test pass. Using a mocking/stubbing framework allows quick on-the-fly generation of interface with unique behavior dedicated to a particular test. In short, you define the stubs needed to let your test pass, and define one mock which will pass or fail the test depending on the feature you want to test.
Our mORMot framework follows this distinction, by defining two dedicated classes, named TInterfaceStub and TInterfaceMock, able to easily define the behavior of such classes.
15.4.2.2. Defining stubs
Let's implement our "forgot my password" scenario test.
The TSynTestCase child method could start as such:
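This is the beginning of the full listing shown later in this chapter:
procedure TMyTest.ForgotMyPassword;
var SmsSender: ISmsSender;
    UserRepository: IUserRepository;
begin
  TInterfaceStub.Create(ISmsSender,SmsSender).
    Returns('Send',[true]);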
It will create a fake class (here called a "stub") emulating the whole ISmsSender interface, store it in the local SmsSender variable, and let its Send method return true.
What is nice with this stubbing / mocking implementation is that:
The "fluent" style of coding makes it easy to write and read the class behavior, without any actual coding in Delphi, nor class definition;
Even if ISmsSender has a lot of methods, only Send matters for us: TInterfaceStub will create all those methods, and let them return default values, without any additional line of code needed;
Memory allocation will be handled by the framework: when the SmsSender instance is released, the associated TInterfaceStub data will also be freed (and, in case of a mock, any expectations will be verified).
15.4.2.3. Defining a mock
Now we will define another fake class, which may fail the test, so it is called a "mock", and the mORMot generator class will be TInterfaceMock:
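This is the corresponding excerpt from the full test listing shown later in this chapter:
  TInterfaceMock.Create(IUserRepository,UserRepository,self).
    ExpectsCount('Save',qoEqualTo,1);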
We provide the TMyTest instance as self to the TInterfaceMock constructor, to associate the mocking aspects with this test case. That is, any registered Expects*() rule will let TMyTest.Check() be called with a boolean condition reflecting the test validation status of every rule.
The ExpectsCount() method is indeed where mocking is defined. When the UserRepository generated instance is released, TInterfaceMock will check all the Expects*() rules, and, in this case, check that the Save method has been called exactly one time (qoEqualTo,1).
15.4.2.4. Running the test
Since we have all the expected stub and mock at hand, let's run the test itself:
with TLoginController.Create(UserRepository,SmsSender) do
try
ForgotMyPassword('toto');
finally
Free;
end;
That is, we run the actual implementation method, which will call our fake methods:
procedure TLoginController.ForgotMyPassword(const UserName: RawUTF8);
var U: TUser;
begin
U := fUserRepository.GetUserByName(UserName);
U.Password := Int32ToUtf8(Random(MaxInt));
if fSmsSender.Send('Your new password is '+U.Password,U.MobilePhoneNumber) then
fUserRepository.Save(U);
end;
Let's put all this together.
15.5. Stubs and Mocks in mORMot
Our mORMot framework is therefore able to stub or mock any Delphiinterface.
We will now detail how it is expected to work.
15.5.1. Direct use of interface types without TypeInfo()
First of all, it is a good practice to always register your service interfaces in the unit which defines their type, as such:
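A minimal sketch, assuming the types are declared in this same unit, could be:
initialization
  TInterfaceFactory.RegisterInterfaces(
    [TypeInfo(ISmsSender),TypeInfo(IUserRepository)]);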
Then creating a stub or a mock could be done directly from the interface name, which will be transmitted as its TGUID, without the need of using the TypeInfo() pseudo-function:
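For instance, once ISmsSender and IUserRepository have been registered, both of the following will work:
  TInterfaceStub.Create(ISmsSender,SmsSender);
  TInterfaceMock.Create(IUserRepository,UserRepository,self);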
In the code below, we will assume that the interface type information has been registered, so that we may be able to use directly I* without the TypeInfo(I*) syntax
15.5.2. Manual dependency injection
As usual, the best way to explain what a library does is to look at the code using it.
Here is an example (similar to the one shipped with RhinoMocks) of verifying that when we execute the "forgot my password" scenario as implemented by the TLoginController class, we actually called the Save() method:
procedure TMyTest.ForgotMyPassword;
var SmsSender: ISmsSender;
UserRepository: IUserRepository;
begin
TInterfaceStub.Create(ISmsSender,SmsSender).
Returns('Send',[true]);
TInterfaceMock.Create(IUserRepository,UserRepository,self).
ExpectsCount('Save',qoEqualTo,1);
with TLoginController.Create(UserRepository,SmsSender) do
try
ForgotMyPassword('toto');
finally
Free;
end;
end;
And... that's all, since the verification will take place when the IUserRepository instance is released.
If you want to follow the "test spy" pattern (i.e. no expectation defined a priori, but manual check after the execution), you can use:
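A sketch of the "run-verify" flavor of the same test follows - the exact Verify() overloads are documented in mORMot.pas, and the ForgotMyPasswordSpy method name is ours:
procedure TMyTest.ForgotMyPasswordSpy;
var SmsSender: ISmsSender;
    UserRepository: IUserRepository;
    Spy: TInterfaceMockSpy;
begin
  TInterfaceStub.Create(ISmsSender,SmsSender).
    Returns('Send',[true]);
  Spy := TInterfaceMockSpy.Create(IUserRepository,UserRepository,self);
  with TLoginController.Create(UserRepository,SmsSender) do
  try
    ForgotMyPassword('toto');
  finally
    Free;
  end;
  Spy.Verify('Save'); // no a priori expectation: the calls are checked here
end;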
This is something unique with our library: you can decide if you want to use the classic "expect-run-verify" pattern, or the somewhat more direct "run-verify" / "test spy" pattern. With mORMot, you pick up your mocking class (either TInterfaceMock or TInterfaceMockSpy), then use it as intended. You can even mix the two aspects in the same instance! It is just a matter of taste and opportunity for you to use the right pattern.
For another easier pattern, like the one in the Mockito home page:
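A sketch transposing that pattern to mORMot, reusing the ICalculator sample interface, could be:
var I: ICalculator;
...
  // define the expected behavior for the given arguments
  TInterfaceStub.Create(ICalculator,I).
    Returns('Add',[10,20],[3]);
  // then use the stubbed interface
  Check(I.Add(10,20)=3);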
If you compare with existing mocking frameworks, even in other languages / platforms like the two above, you will find out that the features included in mORMot are quite complete:
Stubbing of any method, returning default values for results;
Definition of the stubbed behavior via a simple fluent interface, with TInterfaceStub.Returns(), including easy definition of returned results values, for the whole method or following parameters/arguments matchers;
Handle methods with var, out or function result returned values - i.e. not only the function result (as other Delphi implementations do, due to a limitation of the TVirtualInterface standard implementation, on which mORMot does not rely), but all outgoing values, as an array of values;
Stubbed methods can use delegates or event callbacks with TInterfaceStub.Executes() rule definitions, for the whole method or following parameters/arguments matchers, to run a more complex process;
Stubbed methods can also raise exceptions with TInterfaceStub.Raises() rule definitions, for the whole method or following parameters/arguments matchers, if this is the behavior to be tested;
Mocks are directly linked to mORMot's unitary tests / test-driven classes - see below;
Mocked methods can trigger test case failure with TInterfaceMock.Fails() definitions, for the whole method or following parameters/arguments matchers;
Mocking via "expect-run-verify" or "run-verify" (aka "test spy") patterns, on choice, depending on your testing expectations;
Mocking validation against number of execution of a method, or a method with arguments/parameters matchers, or the global execution trace - in this case, pass count can be compared with operators like < <= = <> > >= and not only the classic exact-number-of-times and at-least-once verifications;
Most common parameters and results can be defined as simple array of const in the Delphi code, or by supplying JSON arrays (needed e.g. for more complex structures like record values);
Execution trace retrieval in easy to read or write text format (and not via complex "fluent" interface e.g. with When clauses);
Auto-release of the TInterfaceStub / TInterfaceMock / TInterfaceMockSpy generator instance, when the interface is no longer required, to minimize the code to type, and avoid potential memory leaks;
Works from Delphi 6 up to the latest available Delphi version - since no use of syntax sugar like generics, nor the RTTI.pas features;
Very good performance (the fastest Delphi mocking framework, for sure), due to very low overhead and its reuse of mORMot's low-level interface-based services kernel using JSON serialization, which does not rely on the slow and limited TVirtualInterface.
15.5.3. Stubbing complex return values
Just imagine that the ForgotMyPassword method does perform an internal test:
procedure TLoginController.ForgotMyPassword(const UserName: RawUTF8);
var U: TUser;
begin
U := fUserRepository.GetUserByName(UserName);
Assert(U.Name=UserName);
U.Password := Int32ToUtf8(Random(MaxInt));
if fSmsSender.Send('Your new password is '+U.Password,U.MobilePhoneNumber) then
fUserRepository.Save(U);
end;
This will fail the test for sure, since by default, GetUserByName stubbed method will return a valid but void record. It means that U.Name will equal '', so the highlighted line will raise an EAssertionFailed exception.
Here is how we may enhance our stub, to ensure it will return a TUser value matching U.Name='toto':
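A sketch of such an enhanced fake IUserRepository, assuming the TUser record defined earlier in this chapter, could be:
var U: TUser;
...
  U.Name := 'toto';
  TInterfaceMock.Create(IUserRepository,UserRepository,self).
    ExpectsCount('Save',qoEqualTo,1).
    Returns('GetUserByName',['toto'],[RecordSaveJSON(U,TypeInfo(TUser))]);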
The only trick in the above code is that we use RecordSaveJSON() function to compute the internal JSON representation of the record, as expected by mORMot's data marshalling.
15.5.4. Stubbing via a custom delegate or callback
In some cases, it could be very handy to define a complex process for a given method, without the need of writing a whole implementation class.
A delegate or event callback can be specified to implement this process, with three parameters marshalling modes:
Via some Named[] variant properties (which are the default for the Ctxt callback parameter) - the easiest and safest to work with;
Via some Input[] and Output[] variant properties;
Directly as a JSON array text (the fastest, since native to the mORMot core).
Let's emulate the following behavior:
function TServiceCalculator.Subtract(n1, n2: double): double;
begin
result := n1-n2;
end;
15.5.4.1. Delegate with named variant parameters
You can stub a method using the Named[] variant arrays as such:
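Once registered with TInterfaceStub.Executes('Subtract',...), a callback like the following sketch could implement the behavior - SubtractNamed is a name of ours, and the Ctxt parameter is expected to follow the TOnInterfaceStubExecuteParamsVariant signature:
procedure TMyTestCase.SubtractNamed(Ctxt: TOnInterfaceStubExecuteParamsVariant);
begin // emulates result := n1-n2, using the parameter names of the contract
  Ctxt['result'] := Ctxt['n1']-Ctxt['n2'];
end;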
If the execution fails, it shall call the Ctxt.Error() method with an associated error message to notify the stubbing process of such a failure.
Using named parameters has the advantage of being more explicit in case of change of the method signature (e.g. if you add or rename a parameter). It should be the preferred way of implementing such a callback, in most cases.
15.5.4.2. Delegate with indexed variant parameters
There is another way of implementing such a callback method, directly by using the Input[] and Output[] indexed properties. It should be (a bit) faster to execute:
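A sketch of the same emulation, using the documented Input[] and Output[] indexed properties instead of the parameter names, could be:
procedure TMyTestCase.SubtractIndexed(Ctxt: TOnInterfaceStubExecuteParamsVariant);
begin // Input[0]=n1, Input[1]=n2, Output[0]=result
  Ctxt.Output[0] := Ctxt.Input[0]-Ctxt.Input[1];
end;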
Just as with TOnInterfaceStubExecuteParamsJSON implementation, Input[] index follows the exact order of const and var parameters at method call, and Output[] index follows the exact order of var and out parameters plus any function result.
That is, if you call:
function Subtract(n1,n2: double): double;
...
MyStub.Subtract(100,20);
then Input[0] will contain 100 (n1), Input[1] will contain 20 (n2), and the function result is expected to be stored in Output[0].
15.5.4.3. Delegate with JSON parameters
Finally, the callback may work directly with the transmitted JSON array text - the fastest way, since it is native to the mORMot core. That is, it shall parse incoming parameters from Ctxt.Params, and store the result values as a JSON array in Ctxt.Result.
Input parameter order in Ctxt.Params follows the exact order of const and var parameters at method call, and output parameter order in Ctxt.Returns([]) or Ctxt.Result follows the exact order of var and out parameters plus any function result.
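A sketch of such a JSON callback follows - SubtractJSON is a name of ours, GetNextItemDouble() is assumed to be the CSV helper from SynCommons.pas, and Ctxt.Params is assumed to contain the incoming values as comma-separated JSON array content:
procedure TMyTestCase.SubtractJSON(Ctxt: TOnInterfaceStubExecuteParamsJSON);
var P: PUTF8Char;
    n1,n2: double;
begin // emulates result := n1-n2
  P := pointer(Ctxt.Params);
  n1 := GetNextItemDouble(P);
  n2 := GetNextItemDouble(P);
  Ctxt.Returns([n1-n2]);
end;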
This method could have been written as such, if you prefer to return directly the JSON array:
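A sketch of this variant, assuming DoubleToStr() from SynCommons.pas to compute the JSON array text by hand, could be:
procedure TMyTestCase.SubtractJSONDirect(Ctxt: TOnInterfaceStubExecuteParamsJSON);
var P: PUTF8Char;
    n1,n2: double;
begin
  P := pointer(Ctxt.Params);
  n1 := GetNextItemDouble(P);
  n2 := GetNextItemDouble(P);
  Ctxt.Result := '['+DoubleToStr(n1-n2)+']'; // raw JSON array, written by hand
end;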
This may sound somewhat convenient here in case of double values, but it will be error prone if types are more complex. In all cases, using Ctxt.Returns([]) is the preferred method.
15.5.4.4. Accessing the test case when mocking
In case of mocking, you may add additional verifications within the implementation callback, as such:
Here, an additional callback-private parameter containing 'toto' has been specified at TInterfaceMock definition. Then its content is checked on the associated test case via Ctxt.Sender instance. If the caller is not a TInterfaceMock, it will raise an exception when accessing the Ctxt.TestCase property.
15.5.5. Calls tracing
As stated above, mORMot is able to log all interface calls into its internal TInterfaceStub's structures. This is indeed the root feature of its "test spy" TInterfaceMockSpy.Verify() methods.
By enabling such logging, we can retrieve the whole call stack, including input parameters and returned results, as an easy-to-read JSON content. We found out that JSON is a very convenient way of tracing the method calls, both efficient for the computer and readable by the human being testing the code.
A more complex trace verification could be defined for instance, in the context of an interface mock:
TInterfaceMock.Create(ICalculator,I,self).
Returns('Add','30').
Returns('Multiply',[60]).
Returns('Multiply',[2,35],[70]).
ExpectsCount('Multiply',qoEqualTo,2).
ExpectsCount('Subtract',qoGreaterThan,0).
ExpectsCount('ToTextFunc',qoLessThan,2).
// check trace for a whole method execution
ExpectsTrace('Add','Add(10,30)=[30]').
ExpectsTrace('Multiply','Multiply(10,30)=[60],Multiply(2,35)=[70]').
// check trace for a whole method execution, filtering with given parameters
ExpectsTrace('Multiply',[10,30],'Multiply(10,30)=[60]').
// check trace for the whole interface execution
ExpectsTrace('Add(10,30)=[30],Multiply(10,30)=[60],'+
  'Multiply(2,35)=[70],Subtract(2.3,1.2)=[0],ToTextFunc(2.3)=["default"]').
Returns('ToTextFunc',['default']);
Check(I.Add(10,30)=30);
Check(I.Multiply(10,30)=60);
Check(I.Multiply(2,35)=70);
Check(I.Subtract(2.3,1.2)=0,'Default result');
Check(I.ToTextFunc(2.3)='default');
The overloaded ExpectsTrace() methods are able to add some checks not only about the number of calls of a given method, but the exact order of the executed commands, with associated parameters and all retrieved result values. They can validate the trace of one specific method (optionally with a filter against the incoming parameters), or globally for the whole mocked interface.
Note that internally, those methods will compute a Hash32() hash value of the expected trace, which is a good way of minimizing data in memory or re-use a value retrieved at execution time for further regression testing. Some overloaded signatures are indeed available to directly specify the expected Hash32() value, in case of huge regression scenarios: run the test once, debugging all expected behavior by hand, then store the hash value to ensure that no expected step will be broken in the future.
You have even a full access to the internal execution trace, via the two TInterfaceStub.Log and LogCount properties. This will allow any validation of mocked interface calls logic, beyond ExpectsTrace() possibilities.
15.6. Dependency Injection and Interface Resolution
In our example, we injected the dependencies explicitly as parameters to the class constructor - see Dependency Injection at constructors. We will present below, in a dedicated chapter, how the framework SOA features do resolve services as interfaces.
But real-world applications may be much more complex, so a generic way of resolving dependencies, known as Inversion Of Control (aka IoC), has been implemented.
First of all, if you inherit from TInjectableObject, you will be able to resolve dependencies in two ways:
Explicitly via its Resolve() overloaded methods, for lazy initialization of any registered interface;
Automatically at instance creation, for all its published properties declared with an interface type.
A dedicated set of overloaded constructors is also available at TInjectableObject class level, so that you may be able to easily stub/mock or inject any instance, e.g. for testing purposes:
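A sketch of such a call, matching the description just below, could be as follows - check the exact CreateInjected() overloads in mORMot.pas; Test stands for a local TServiceToBeTested variable, and RestInstance for any TSQLRest of yours:
Test := TServiceToBeTested.CreateInjected(
  [ICalculator],
  [TInterfaceMock.Create(IPersistence,self).
     ExpectsCount('SaveItem',qoEqualTo,1),
   RestInstance.Services],
  [AnyInterfacedObject]);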
This test case (TMyTestCase inherits from TSynTestCase) will create a TServiceToBeTested instance, create a TInterfaceStub for its ICalculator dependency, then a TInterfaceMock expecting the IPersistence.SaveItem method to be called exactly one time, allowing resolution from a TSQLRest.Services SOA resolver, and injecting a pre-existing AnyInterfacedObject TInterfacedObject instance.
Then, dependency resolution may take place as published properties:
type
TServiceToBeTested = class(TInjectableObject)
protected
fCalculator: ICalculator;
...
published
property Calculator: ICalculator read fCalculator;
...
end;
...
function TServiceToBeTested.DoCalculation(a,b: integer): integer;
begin
result := Calculator.Add(a,b);
end;
This fCalculator instance will be resolved and instantiated by TInjectableObject.Create, then released as any regular interface field in the class destructor. You do not have to overload the TServiceToBeTested constructor, nor manage this fCalculator life time. Its auto-created instance will be shared by the whole TServiceToBeTested context, so it should be either stateless (like adding two numbers), or expected to evolve at each use.
Sometimes, there may be an over-cost to initialize such properties each time a TServiceToBeTested class instance is created. Or maybe the interface implementation is not stateless, and a new instance should be retrieved before each use. As an alternative, any interface may be resolved on need, in a lazy way:
procedure TServiceToBeTested.DoSomething;
var persist: IPersistence;
begin
Resolve(IPersistence,persist);
persist.SaveItem('John','Doe');
end;
The TInjectableObject.Resolve() overloaded methods will retrieve one instance of the asked interface. The above code will raise an exception if the supplied IPersistence was not previously registered to the TInjectableObject class.
When such an TInjectableObject instance is created within mORMot's SOA methods (i.e. TSQLRest.Services property), the injection will transparently involve all registered classes. Also take a look at the TInterfaceResolverInjected.RegisterGlobal() overloaded methods, which are able to register some class types or instances globally for the whole executable context. Just make sure that you won't break the Open/Closed Principle, by defining such a global registration, which should occur only for specific needs, truly orthogonal to the whole application, or specific to a test case.
16. Client-Server services via interfaces
In the real world, especially when your application relies heavily on services, the Client-Server services via methods implementation pattern has some drawbacks:
Most content marshalling is to be done by hand, so may introduce implementation issues;
Client and server side code does not have the same implementation pattern, so you will have to explicitly code data marshalling twice, for both client and server (DataSnap and WCF both suffer from a similar issue, by which client classes shall be coded separately, most of the time generated by a Wizard);
You can not easily test your services, unless you write a lot of code to emulate a "fake" service implementation;
The services do not have any hierarchy, and are listed as a plain list, which is not very convenient;
It is difficult to synchronize several service calls within a single context, e.g. when a workflow is to be handled during the application process (you have to code some kind of state machine on both sides, and define all session handling by hand);
Security is handled globally for the user, or should be checked by hand in the implementation method (using the Ctxt.Session* members);
There is no way of implementing service callbacks, using e.g. WebSockets.
You can get rid of those limitations with the interface-based service implementation of mORMot. For a detailed introduction and best practice guide to SOA, see Service-Oriented Architecture (SOA). All commonly expected SOA features are now available in the current implementation of the mORMot framework (including service catalog aka "broker", via the optional publication of interface signatures).
16.1. Implemented features
Here are the key features of the current implementation of services using interfaces in the Synopse mORMot framework, as implemented in mORMot.pas unit:
Feature
Remarks
Service Orientation
Allow loosely-coupled relationship
Design by contract
Service Contracts are defined in Delphi code as standard interface custom types
Factory driven
Get an implementation instance from a given interface
Server factory
You can get an implementation on the server side
Client factory
You can get a "fake" implementation on the client side, remotely calling the server to execute the process
Cross-platform clients
A mORMot server is able to generate cross-platform client code via a set of templates - see below
Auto marshalling
The contract is transparently implemented: no additional code is needed e.g. on the client side, and will handle simple types (strings, numbers, dates, sets and enumerations) and high-level types (objects, collections, records, dynamic arrays, variants) from Delphi 6 up to the latest available Delphi version
Flexible
Methods accept per-value or per-reference parameters
Instance lifetime
An implementation class can be: - Created on every call, - Shared among all calls, - Shared for a particular user or group, - Dedicated to the thread it runs on, - Alive as long as the client-side interface is not released, - Or as long as an authentication session exists
Stateless
Following a standard request/reply pattern
Statefull
Server side implementation may be synchronized with client-side interface, e.g. over WebSockets
Dual way
You can define callbacks, using e.g. WebSockets for immediate notification
Signed
The contract is checked to be consistent before any remote execution
Secure
Every service and/or methods can be enabled or disabled on need
Services are hosted by default within the main ORM server, but can have their own process, with a dedicated connection to the ORM core
Broker ready
Service meta-data can be optionally revealed by the server
Multiple transports
All Client-Server protocols of mORMot are available, i.e. direct in-process connection, Windows Messages, named pipes, TCP/IP-HTTP
JSON based
Transmitted data uses JavaScript Object Notation
Routing choice
Services are identified either at the URI level (the RESTful way), or in a JSON-RPC model (the AJAX way), or via any custom format (using class inheritance)
AJAX and RESTful
JSON and HTTP combination allows services to be consumed from AJAX rich clients
Light & fast
Performance and memory consumption are very optimized, in order to ensure scalability and ROI
16.2. How to make services
The typical basic tasks to perform are the following:
Define the service contract;
Implement the contract;
Configure and host the service;
Build a client application.
We will describe those items.
16.3. Defining a service contract
In a SOA, services tend to create a huge list of operations. In order to facilitate implementation and maintenance, operations shall be grouped within common services.
Before defining how such services are defined within mORMot, it is worth applying the Service-Oriented Architecture (SOA) main principles, i.e. a loosely-coupled relationship. When you define mORMot contracts, ensure that each contract will stay un-coupled from other contracts. It will help writing SOLID code, enhance maintainability, and allow introducing other service providers on demand (some day or later, you'll certainly be asked to replace one of your services with a third-party existing implementation of the corresponding feature: you shall at least ensure that your own implementation could easily be re-coded with external code, using e.g. a SOAP/WSDL gateway).
16.3.1. Define an interface
The service contract is to be defined as a plain Delphi interface type. In fact, the sample type as stated above - see Interfaces - can be used directly:
type
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    /// add two signed 32-bit integers
    function Add(n1,n2: integer): integer;
  end;
This ICalculator.Add method will define one "Add" operation, under the "ICalculator" service (which will be named internally 'Calculator' by convention). This operation will expect two numbers as input, and then return the sum of those numbers.
The current implementation of service has the following expectations:
Any interface inheriting from IInvokable, with a GUID, can be used - we expect the RTTI to be available, so IInvokable is a good parent type;
You can inherit an interface from an existing one: in this case, the inherited methods will be part of the child interface, and will be expected to be implemented (just as with standard Delphi code);
Only plain ASCII names are allowed for the type definition (as it is conventional to use English spelling for service and operation naming);
Calling convention shall be register (the Delphi default) - neither stdcall nor cdecl is available yet, but this won't be a restriction, since the interface definition is dedicated to the Delphi code scope;
Methods can have a result, and accept per-value or per-reference parameters.
In fact, parameters expectations are the following:
Simple types (strings, numbers, dates, sets and enumerations) and high-level types (objects, collections, records and dynamic arrays) are handled - see below for the details;
They can be defined as const, var or out - in fact, const and var parameters values will be sent from the client to the server as JSON, and var and out parameters values will be returned as JSON from the server;
procedure or function kind of method definition are allowed;
The only exception is that you can't have a function returning a class instance (how would the caller know when to release the instance in this case?), but such instances can be passed as const, var or out parameters (and published properties will be serialized within the JSON message);
In fact, the TCollection kind of parameter is not directly handled by the framework: you shall define a TInterfacedCollection class, overriding its GetClass abstract virtual method (otherwise the server side won't be able to create the kind of collection as expected);
Special TServiceCustomAnswer kind of record can be used as function result to specify a custom content (with specified encoding, to be used e.g. for AJAX or HTML consumers) - in this case, no var nor out parameters values shall be defined in the method (only the BLOB value is returned).
16.3.2. Service Methods Parameters
Handled types of parameters are:
Delphi type
Remarks
boolean
Transmitted as JSON true/false
integer cardinal Int64 double currency
Transmitted as JSON numbers
enumerations
Transmitted as JSON number
set
Transmitted as JSON number - one bit per element (up to 32 elements)
string RawUTF8 WideString
Transmitted as UTF-8 JSON text, but prior to Delphi 2009, the framework will ensure that both client and server sides use the same ANSI code page - so you should better use RawUTF8 everywhere
RawJSON
UTF-8 buffer transmitted with no serialization (whereas a RawUTF8 will be escaped as a JSON string) - expects to contain valid JSON content, e.g. for TSQLTableJSON requests
record
Need to have RTTI (so a string or dynamic array field within), just like with regular Delphi interface expectations - transmitted as binary with Base64 encoding before Delphi 2010, or as JSON object thanks to the enhanced RTTI available since, or via a custom JSON serialization - see Record serialization
TServiceCustomAnswer
If used as a function result (not as parameter), the supplied content will be transmitted directly to the client (with no JSON serialization); in this case, no var nor out parameters are allowed in the method - it will be compatible with both our TServiceFactoryClient implementation, and any other service consumers (e.g. AJAX)
interface
A callback instance could be specified, to allow asynchronous notification, using e.g. WebSockets - see below
You can therefore define complex interface types, as such:
type
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    /// add two signed 32-bit integers
    function Add(n1,n2: integer): integer;
    /// multiply two signed 64-bit integers
    function Multiply(n1,n2: Int64): Int64;
    /// substract two floating-point values
    function Subtract(n1,n2: double): double;
    /// convert a currency value into text
    procedure ToText(Value: Currency; var Result: RawUTF8);
    /// convert a floating-point value into text
    function ToTextFunc(Value: double): string;
    /// do some work with strings, sets and enumerates parameters,
    // testing also var (in/out) parameters and set as a function result
    function SpecialCall(Txt: RawUTF8; var Int: integer; var Card: cardinal; field: TSynTableFieldTypes;
      fields: TSynTableFieldTypes; var options: TSynTableFieldOptions): TSynTableFieldTypes;
    /// test integer, strings and wide strings dynamic arrays, together with records
    function ComplexCall(const Ints: TIntegerDynArray; Strs1: TRawUTF8DynArray;
      var Str2: TWideStringDynArray; const Rec1: TVirtualTableModuleProperties;
      var Rec2: TSQLRestCacheEntryValue): TSQLRestCacheEntryValue;
    /// test variant kind of parameters
    function TestVariants(const Text: RawUTF8; V1: variant; var V2: variant): variant;
    /// validates ArgsInputIsOctetStream raw binary upload
    function DirectCall(const Data: TSQLRawBlob): integer;
  end;
Note how SpecialCall and ComplexCall methods have quite complex parameters definitions, including dynamic arrays, sets and records. DirectCall will use binary POST, by-passing Base64 JSON encoding - see below. The framework will handle const and var parameters as expected, i.e. as input/output parameters, also on the client side. Any simple type of dynamic array (like TIntegerDynArray, TRawUTF8DynArray, or TWideStringDynArray) will be serialized as a plain JSON array - the framework is able to handle any dynamic array definition, but will serialize those simple types in a more AJAX compatible way, thanks to the enhanced RTTI available since Delphi 2010.
16.3.3. TPersistent / TSQLRecord parameters
As stated above, mORMot does not allow a method function to return a class instance.
That is, you can't define such a method:
ICustomerFactory = interface(IInvokable)
['{770D009F-15F4-4307-B2AD-BBAE42FE70C0}']
function NewCustomer: TCustomer;
end;
Who will be in charge of freeing the instance, in client-server mode? There is no standard allocation scheme, in Delphi, for such parameters. So every TObject parameter instance shall be managed by the caller, i.e. allocated before the call and released after it. The method will just read or write the instance published properties, and serialize them as JSON.
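The contract may instead be written with an out parameter - here is a sketch, reusing the same interface as above:
ICustomerFactory = interface(IInvokable)
  ['{770D009F-15F4-4307-B2AD-BBAE42FE70C0}']
  procedure NewCustomer(out Customer: TCustomer);
end;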
Note that here the out keyword does not indicate how the memory is allocated, but shows the communication direction of the remote service, i.e. it will serialize the object at method return. The caller shall instantiate an instance before call - whereas for "normal" Delphi code, it may be up to the method to instantiate the instance, and return it.
Or, using both the Factory and Repository patterns, as proposed below:
var Factory: ICustomerFactory;
Repository: ICustomerRepository;
Customer: TCustomer;
...
Factory.NewCustomer(Customer); // get a new object instance
try
Customer.FirstName := StringToUTF8(EditFirstName.Text);
Customer.LastName := StringToUTF8(EditLastName.Text);
NewCustomerID := Repository.Save(Customer); // persist the object
finally
Customer.Free; // properly manage memory
end;
In real life, it may be very easy to wrongly write a server method returning an existing instance, which will be released by the server SOA caller, and will randomly trigger unexpected access violations - very difficult to track - on the server side. Which is what we want to avoid... Whereas a nil pointer always gives a clear access violation on the client side, which doesn't affect the server. So this requirement/limitation was designed as such to make the server side more resilient to errors, even if the client side is a bit more complex to work with. Usually, on the client side, you can safely pre-allocate your object instances, and reuse them.
16.3.4. Record parameters
By default, any record parameter or function result will be serialized with a proprietary binary (and optimized) layout, then transmitted as a JSON string, after Base64 encoding.
Even if older versions of Delphi are not able to generate the needed RTTI information for such serialization, allowing us only to use an efficient but proprietary binary layout, the mORMot framework offers a common way of implementing any custom serialization of records. See Record serialization.
Note that the callback signature used for records matches the one used for dynamic arrays serializations - see Dynamic array serialization - as it will be shared between the two of them.
When records are used as Data Transfer Objects within services (which is a good idea in common SOA implementation patterns), such a custom serialization format can be handy, and makes more natural service consumption with AJAX clients.
16.3.5. TCollection parameters
16.3.5.1. Use of TCollection
With mORMot services, you are able to define such a contract, e.g. for a TCollTests collection of TCollTest items:
procedure Collections(Item: TCollTest; var List: TCollTests; out Copy: TCollTests);
Typical implementation of this contract may be:
procedure TServiceComplexCalculator.Collections(Item: TCollTest;
var List: TCollTests; out Copy: TCollTests);
begin
CopyObject(Item,List.Add);
CopyObject(List,Copy);
end;
That is, it will append the supplied Item object to the provided List content, then return a copy in the Copy content:
Setting Item without var or out specification is doing the same as const: it will be serialized from client to server (and not back from server to client);
Setting List as var parameter will let this collection to be serialized from client to server, and back from server to the client;
Setting Copy as out parameter will let this collection to be serialized only from server to client.
Note that const / var / out kind of parameters are used at the contract level in order to specify the direction of serialization, and not as usual (i.e. to define if it is passed by value or by reference). All class parameters shall be instantiated before method call: you can not pass any object parameter as nil (nor use it in a function result): it will raise an error.
Due to the current implementation pattern of the TCollection type in Delphi, it was not possible to implement directly this kind of parameter.
In fact, the TCollection constructor is defined as such:
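constructor Create(ItemClass: TCollectionItemClass); // as declared in Classes.pas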
And, on the server side, we do not know which kind of TCollectionItemClass is to be passed. Therefore, the TServiceFactoryServer is unable to properly instantiate the object instances, supplying the expected item class.
16.3.5.2. Use of TInterfacedCollection
The first solution is to define a TInterfacedCollection descendant, and override its GetClass virtual method to supply the expected collection item class, as in the sketch below. All other methods and properties (like GetColItem / Add / Items[]) are to be defined as usual.
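A minimal sketch, using the TCollTests / TCollTest types mentioned earlier in this chapter, could be:
type
  TCollTests = class(TInterfacedCollection)
  protected
    class function GetClass: TCollectionItemClass; override;
  end;

class function TCollTests.GetClass: TCollectionItemClass;
begin
  result := TCollTest;
end;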
16.3.5.3. Register a TCollection type
The other way of using TCollection kind of parameters is to declare it explicitly to the framework. You should call JSONSerializer.RegisterCollectionForJSON() with the corresponding TCollection / TCollectionItem class type pair.
Consider a dedicated class:
TMyCollection = class(TCollection);
Note that a dedicated type is needed here. You just can't use this registration over a plain TCollection.
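The registration may then be written as such - a sketch, with TMyCollectionItem standing for the associated item class, as referenced below:
JSONSerializer.RegisterCollectionForJSON(TMyCollection,TMyCollectionItem);
From now on, any of the following constructions will create a working instance: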
MyColl := TMyCollection.Create(TMyCollectionItem);
MyColl := ClassInstanceCreate(TMyCollection) as TMyCollection;
MyColl := ClassInstanceCreate('TMyCollection') as TMyCollection;
The last two will retrieve the associated TMyCollectionItem class type from the previous registration.
Thanks to this internal registration table, mORMot will be able to serialize and unserialize such TCollection types.
16.4. Server side
16.4.1. Implementing the service contract
In order to have an operating service, you'll need to implement a Delphi class which matches the expected interface.
In fact, the sample type as stated above - see Interfaces - can be used directly:
type
  TServiceCalculator = class(TInterfacedObject, ICalculator)
  public
    function Add(n1,n2: integer): integer;
  end;

function TServiceCalculator.Add(n1, n2: integer): integer;
begin
  result := n1+n2;
end;
And... That is all we need. The Delphi IDE will check at compile time that the class really implements the specified interface definition, so you'll be sure that your code meets the service contract expectations. Exact match (like handling type of parameters) will be checked by the framework when the service factory will be initialized, so you won't face any runtime exception due to a wrong definition.
Here the class inherits from TInterfacedObject, but you could use any plain Delphi class: the only condition is that it implements the ICalculator interface.
16.4.2. Set up the Server factory
In order to have a working service, you'll need to initialize a server-side factory, as such:
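A minimal sketch, assuming Server is your TSQLRestServer instance and TServiceCalculator / ICalculator are defined as in the previous paragraphs, could be:
Server.ServiceDefine(TServiceCalculator,[ICalculator],sicShared);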
The Server instance can be any TSQLRestServer inherited class, implementing any of the supported protocol of mORMot's Client-Server process, embedding a full SQLite3 engine (i.e. a TSQLRestServerDB class) or a lighter in-memory engine (i.e. a TSQLRestServerFullMemory class - which is enough for hosting services with authentication).
The code line above will register the TServiceCalculator class to implement the ICalculator service, with a single shared instance life time (specified via the sicShared parameter). An optional time out value can be specified, in order to automatically release a deprecated instance after some inactivity.
Whenever a service is executed, an implementation class is to be available. The life time of this implementation class is defined on both client and server side, by specifying a TServiceInstanceImplementation value. This setting must be the same on both client and server sides (it will be checked by the framework).
16.4.3. Instances life time implementation
The available instance management options are the following:
Lifetime
Description
sicSingle
One class instance is created per call: - This is the most expensive way of implementing the service, but is safe for simple workflows (like a one-time call); - This is the default setting for TSQLRestServer.ServiceRegister/ServiceDefine methods.
sicShared
One object instance is used for all incoming calls and is not recycled subsequent to the calls
sicClientDriven
One object instance will be created in synchronization with the client-side lifetime of the corresponding interface: when the interface will be released on client (either when it comes out of scope or set to nil), it will be released on the server side - a numerical identifier will be transmitted with all JSON requests
sicPerSession
One object instance will be maintained during the whole running session
sicPerUser
One object instance will be maintained and associated with the running user
sicPerGroup
One object instance will be maintained and associated with the running user's authorization group
sicPerThread
One object instance will be maintained and associated with the running thread
Of course, sicPerSession, sicPerUser and sicPerGroup modes will expect a specific user to be authenticated. Those implementation patterns will therefore only be available if the RESTful authentication is enabled between client and server.
Typical use of each mode may be the following:
Lifetime
Use case
sicSingle
An asynchronous process (may be resource consuming)
sicShared
Either a very simple process, or requiring some global data
sicClientDriven
The best candidate to implement a Business Logic workflow
sicPerSession
To maintain some data specific to the client application
sicPerUser
Access to some data specific to one user
sicPerGroup
Access to some data shared by a user category (e.g. administrators, or guests)
sicPerThread
Thread-oriented process (e.g. for proper library initialization)
In the current implementation of the framework, the class instance is allocated in memory.
This has two consequences:
In client-server architecture, it is very likely that a lot of such instances will be created. It is therefore mandatory that it won't consume a lot of resource, especially with long-term life time: e.g. you should not store any BLOB within these instances, but try to restrict the memory use to the minimum. For a more consuming operation (a process which may need memory and CPU power), the sicSingle mode is preferred.
There is no built-in data durability yet: service implementation shall ensure that data remaining in memory between calls (i.e. when not defined in sicSingle mode) won't be missing in case of server shutdown. It is up to the class to persist the needed data - using e.g. Object-Relational Mapping.
Note also that all those life-time modes expect the method implementation code to be thread-safe and re-entrant on the server side - the only exceptions are the sicSingle mode, which will have its own running instance, and sicPerThread, which will have its methods always run in the same thread context. In practice, the same user can open more than one connection, therefore it is recommended to protect all implementation class method process, or set the execution options as expected - see below.
In order to illustrate sicClientDriven implementation mode, let's introduce the following interface and its implementation (extracted from the supplied regression tests of the framework):
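A sketch of the contract, matching the implementation class listed below (the GUID is not reproduced here), could be:
type
  IComplexNumber = interface(IInvokable)
    ['{...}'] // a GUID shall be generated here
    procedure Assign(aReal, aImaginary: double);
    function GetImaginary: double;
    function GetReal: double;
    procedure SetImaginary(const Value: double);
    procedure SetReal(const Value: double);
    procedure Add(aReal, aImaginary: double);
    property Real: double read GetReal write SetReal;
    property Imaginary: double read GetImaginary write SetImaginary;
  end;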
Purpose of this interface is to store a complex number within its internal fields, then retrieve their values, and define a "Add" method, to perform an addition operation. We used properties, with associated getter and setter methods, to provide object-like behavior on Real and Imaginary fields, in the code.
This interface is implemented on the server side by the following class:
type
  TServiceComplexNumber = class(TInterfacedObject,IComplexNumber)
  private
    fReal: double;
    fImaginary: double;
    function GetImaginary: double;
    function GetReal: double;
    procedure SetImaginary(const Value: double);
    procedure SetReal(const Value: double);
  public
    procedure Assign(aReal, aImaginary: double);
    procedure Add(aReal, aImaginary: double);
    property Real: double read GetReal write SetReal;
    property Imaginary: double read GetImaginary write SetImaginary;
  end;

{ TServiceComplexNumber }

procedure TServiceComplexNumber.Add(aReal, aImaginary: double);
begin
  fReal := fReal+aReal;
  fImaginary := fImaginary+aImaginary;
end;

procedure TServiceComplexNumber.Assign(aReal, aImaginary: double);
begin
  fReal := aReal;
  fImaginary := aImaginary;
end;

function TServiceComplexNumber.GetImaginary: double;
begin
  result := fImaginary;
end;

function TServiceComplexNumber.GetReal: double;
begin
  result := fReal;
end;

procedure TServiceComplexNumber.SetImaginary(const Value: double);
begin
  fImaginary := Value;
end;

procedure TServiceComplexNumber.SetReal(const Value: double);
begin
  fReal := Value;
end;
This interface is registered on the server side as such:
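A minimal sketch, mirroring the registration shown for ICalculator above, could be:
Server.ServiceDefine(TServiceComplexNumber,[IComplexNumber],sicClientDriven);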
Using the sicClientDriven mode, the client side will also have the corresponding server-side instance life time handled as expected. That is, both fReal and fImaginary fields will remain allocated on the server side as long as needed. A time-out driven garbage collector will delete any un-closed pending session, and therefore release resources allocated in sicClientDriven mode, even in case of a broken connection.
16.4.4. Accessing low-level execution context
16.4.4.1. Retrieve information from the global ServiceContext
When any interface-based service is executed, a global threadvar named ServiceContext can be accessed to retrieve the currently running context on the server side.
You will have access to the following information, which could be useful for sicPerSession, sicPerUser and sicPerGroup instance life time modes:
TServiceRunningContext = record
  /// the currently running service factory
  // - it can be used within server-side implementation to retrieve the
  // associated TSQLRestServer instance
  // - note that TServiceFactoryServer.Get() won't override this value, when
  // called within another service (i.e. if Factory is not nil)
  Factory: TServiceFactoryServer;
  /// the currently running context which launched the method
  // - low-level RESTful context is also available in its Call member
  // - Request.Server is the safe access point to the underlying TSQLRestServer,
  // unless the service is implemented via TInjectableObjectRest, so the
  // TInjectableObjectRest.Server property is preferred
  // - make available e.g. current session or authentication parameters
  // (including e.g. user details via Request.Server.SessionGetUser)
  Request: TSQLRestServerURIContext;
  /// the thread which launched the request
  // - is set by TSQLRestServer.BeginCurrentThread from multi-thread server
  // handlers - e.g. TSQLite3HttpServer or TSQLRestServerNamedPipeResponse
  RunningThread: TThread;
end;
When used, a local copy or a PServiceRunningContext pointer should be created, since accessing a threadvar has a non-negligible performance cost.
If your code is compiled within some packages, reading a threadvar won't work, due to a Delphi compiler/RTL restriction (bug?). In such a case, you have to call the following function instead of directly accessing the threadvar:
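In current framework revisions, this helper is a plain function of mORMot.pas, returning a copy of the threadvar content - shown here as a sketch, in case your revision differs:
/// wrapper to retrieve the current execution context,
// usable even when the code is compiled within a Delphi package
function CurrentServiceContext: TServiceRunningContext;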
Note that this global threadvar is reset to 0 outside an interface-based service method call. It will therefore be useless to read it from a method-based service, for instance.
16.4.4.2. Implement your service from TInjectableObjectRest
An issue with the ServiceContext threadvar is that the execution context won't be filled when a SOA method is executed outside a client/server context, e.g. if the TSQLRestServer instance resolved its dependencies itself using Services.Resolve().
A safer (and slightly faster) alternative is to implement your service by inheriting from the TInjectableObjectRest class. This class has its own Resolve() overloaded methods (inherited from TInjectableObject), but also two additional properties:
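As a sketch (the exact declaration may differ slightly in your framework revision), those two properties read as follows:
type
  TInjectableObjectRest = class(TInjectableObject)
  protected
    fFactory: TServiceFactoryServer;
    fServer: TSQLRestServer;
  public
    /// the service factory which instantiated this implementation object
    property Factory: TServiceFactoryServer read fFactory;
    /// direct and safe access to the underlying REST server and its ORM methods
    property Server: TSQLRestServer read fServer;
  end;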
Those properties will be injected by TServiceFactoryServer.CreateInstance, i.e. when the service implementation object will be instantiated on the server side. They will give direct and safe access to the underlying REST server, e.g. all its ORM methods.
16.4.5. Using services on the Server side
Once the service is registered on the server side, it is very easy to use it in your code.
In a complex Service-Oriented Architecture (SOA), it is not a good practice to have services calling each other. Code decoupling is a key to maintainability here. But in some cases, you'll have to consume services on the server side, especially if your software architecture has several layers (like in a Domain-Driven Design): your application services could be decoupled, but the Domain-Driven services (those implementing the business model) could be on another Client-Server level, with a dedicated protocol, and could have nested calls.
In this case, according to the SOLID design principles, you'd better rely on abstraction in your code, i.e. not call the service implementation (i.e. the TInterfacedObject instances or even worse directly the low-level classes or functions), but the service abstract interface. You can use the following method of your TSQLRest.Services instance (note that this method is available on both client and server sides as abstract TServiceFactory, so is the right access point to all services):
var CN: IComplexNumber;
begin
  if not ServiceContext.Request.Server.Services.Resolve(IComplexNumber,CN) then
    exit; // IComplexNumber interface not found
  CN.Real := 0.01;
  CN.Imaginary := 3.1415;
  CN.Add(100,200);
  assert(SameValue(CN.Real,100.01));
  assert(SameValue(CN.Imaginary,203.1415));
end; // here CN will be released
For newer generic-aware versions of Delphi (i.e. Delphi 2010 and up, since the Delphi 2009 implementation of generics is buggy), you can use the following method, which enables compile-time checking:
var I: ICalculator;
begin
  I := Server.Service<ICalculator>;
  if I<>nil then
    result := I.Add(10,20);
end;
You can of course cache/store your TServiceFactory or TSQLRest instances within a local field, if you wish. Using ServiceContext.Request.Server is verbose and error-prone. You may instead consider implementing your service from TInjectableObjectRest: the TInjectableObjectRest class already has its built-in Resolve() overloaded methods, and direct access to the underlying Server: TSQLRestServer instance. So you will be able to write both SOA and ORM code directly:
var I: ICalculator;
begin
  if Resolve(ICalculator,I) then
    Server.Add(TSQLRecordExecution,['Add',I.Add(10,20)]);
end;
If the service has been defined as sicPerThread, the instance you will retrieve on the server side will also be specific to the running thread - in this case, caching the instance may be a source of confusion, since there will be one dedicated instance per thread.
16.5. Client side
There is no implementation at all on the client side. This is the magic of mORMot's services: no Wizard to call (as in DataSnap, RemObjects or WCF), nor client-side methods to write - as with our Client-Server services via methods.
You just register the existing interface definition (e.g. our ICalculator type), and you can remotely access all its methods, executed on the server side.
In fact, a hidden "fake" TInterfacedObject class will be created by the framework (including its internal VTable and low-level assembler code), and used to interact with the remote server. But you do not have to worry about this process: it is transparent to your code.
16.5.1. Set up the Client factory
On the client side, you have to register the corresponding interface to initialize its associated factory, as such:
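The registration code itself is not quoted above; it typically boils down to a single line, as used in the later samples of this chapter (assuming Client is a TSQLRestClientURI instance):
// register ICalculator on the client side, in the same sicShared mode as on
// the server side - no implementation class is needed here
Client.ServiceDefine([ICalculator],sicShared);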
It is very close to the Server-side registration, despite the fact that we do not provide any implementation class here. Implementation will remain on the server side.
Note that the implementation mode (here sicShared) shall match the one used on the server side. An error will occur if this setting is not coherent.
The other interface we talked about, i.e. IComplexNumber, is registered as such for the client:
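That is, something like the following sketch:
// client-side registration of IComplexNumber, in sicClientDriven mode
Client.ServiceRegister([TypeInfo(IComplexNumber)],sicClientDriven);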
This will create the corresponding TServiceFactoryClient instance, ready to serve fake implementation classes to the client process.
To be more precise, this registration step is indeed not mandatory on the client side. If you use the TServiceContainerClient.Info() method, the client-side implementation will auto-register the supplied interface, in sicClientDriven implementation mode.
16.5.2. Using services on the Client side
Once the service is registered on the client side, it is very easy to use it in your code.
You can use the same methods as on the server side to retrieve a TServiceFactory instance.
That is, you may code:
var I: ICalculator;
begin
  if Client.Services['Calculator'].Get(I) then
    result := I.Add(10,20);
end;
For Delphi 2010 and up, you can use a generic-based method, which enables compile-time checking:
var I: ICalculator;
begin
  I := Client.Service<ICalculator>;
  if I<>nil then
    result := I.Add(10,20);
end;
For a more complex service, initialized as sicClientDriven:
var CN: IComplexNumber;
begin
  if not Client.Services.Resolve(IComplexNumber,CN) then
    exit; // IComplexNumber interface not found
  CN.Real := 0.01;
  CN.Imaginary := 3.1415;
  CN.Add(100,200);
  assert(SameValue(CN.Real,100.01));
  assert(SameValue(CN.Imaginary,203.1415));
end; // here CN will be released on both client AND SERVER sides
The code is just the same as on the server. The only functional change is that the execution will take place on the server side (using the registered TServiceComplexNumber implementation class), and the corresponding class instance will remain active until the CN local interface is released on the client.
You can of course cache your TServiceFactory instance within a local field, if you wish. On the client side, even if the service has been defined as sicPerThread, you can safely cache and reuse the same instance, since the per-thread process will take place on the server side only.
As we stated in the previous paragraph, since the IComplexNumber is to be executed as sicClientDriven, it is not mandatory to call the Client.ServiceRegister or ServiceDefine method for this interface. In fact, during Client.Services.Info(TypeInfo(IComplexNumber)) method execution, the registration will take place, if it has not been done explicitly before. For code readability, it may be a good idea to explicitly register the interface on the client side also, just to emphasize that this interface is about to be used, and in which mode.
16.6. Sample code
You can find in the "SQLite3/Samples/14 - Interface based services" folder of the supplied source code distribution, a dedicated sample about this feature.
Purpose of this code is to show how to create a client-server service, using interfaces, over named pipe communication.
16.6.1. The shared contract
First, you'll find a common unit, shared by both client and server applications:
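The unit source is not reproduced here; a simplified sketch of its content (the GUID and constant values being illustrative) would be:
unit Project14Interface;

interface

uses mORMot;

type
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']  // illustrative GUID
    function Add(n1,n2: integer): integer;
  end;

const
  ROOT_NAME = 'root';              // illustrative values
  APPLICATION_NAME = 'RestService';

implementation

initialization
  // register ICalculator in the internal factory, so that the plain
  // ICalculator type can be used instead of TypeInfo(ICalculator)
  TInterfaceFactory.RegisterInterfaces([TypeInfo(ICalculator)]);
end.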
The unique purpose of this unit is to define the service interface, the ROOT_NAME used for the ORM Model (and therefore the RESTful URI scheme), and the APPLICATION_NAME used for named-pipe communication.
This ICalculator type is also registered for the internal interface factory system, so that you could use the framework methods directly with ICalculator instead of TypeInfo(ICalculator).
16.6.2. The server sample application
The server is implemented as such:
program Project14Server;

{$APPTYPE CONSOLE}

uses
  SysUtils,
  mORMot,
  mORMotSQLite3,
  Project14Interface;

type
  TServiceCalculator = class(TInterfacedObject, ICalculator)
  public
    function Add(n1,n2: integer): integer;
  end;

function TServiceCalculator.Add(n1, n2: integer): integer;
begin
  result := n1+n2;
end;

var
  aModel: TSQLModel;
begin
  aModel := TSQLModel.Create([],ROOT_NAME);
  try
    with TSQLRestServerDB.Create(aModel,ChangeFileExt(paramstr(0),'.db'),true) do
    try
      CreateMissingTables; // we need AuthGroup and AuthUser tables
      ServiceDefine(TServiceCalculator,[ICalculator],sicShared);
      if ExportServerNamedPipe(APPLICATION_NAME) then
        writeln('Background server is running.'#10) else
        writeln('Error launching the server'#10);
      write('Press [Enter] to close the server.');
      readln;
    finally
      Free;
    end;
  finally
    aModel.Free;
  end;
end.
It will instantiate a TSQLRestServerDB class, containing a SQLite3 database engine. In fact, since we need authentication, both AuthGroup and AuthUser tables are expected to be available.
Then a call to ServiceDefine() will define the ICalculator contract, and the TServiceCalculator class to be used as its implementation. The sicShared mode is used, since the same implementation class can be shared during all calls (there is no shared nor private data to take care of).
Note that since the database expectations of this server are basic (only CRUD commands are needed to handle the authentication tables), we may use a TSQLRestServerFullMemory class instead of TSQLRestServerDB. This is the purpose of the Project14ServerInMemory.dpr sample:
program Project14ServerInMemory;
(...)
  with TSQLRestServerFullMemory.Create(aModel,'test.json',false,true) do
  try
    ServiceDefine(TServiceCalculator,[ICalculator],sicShared);
    if ExportServerNamedPipe(APPLICATION_NAME) then
(...)
Using this class will include the CreateMissingTables call to create both AuthGroup and AuthUser tables needed for authentication. But the resulting executable will be lighter: only 200 KB when compiled with Delphi 7 and our LVCL classes, for a full service provider.
16.6.3. The client sample application
The client is just a simple form with two TEdit fields (edtA and edtB), and a "Call" button, which OnClick event is implemented as:
procedure TForm1.btnCallClick(Sender: TObject);
var a,b: integer;
    err: integer;
    I: ICalculator;
begin
  val(edtA.Text,a,err);
  if err<>0 then begin
    edtA.SetFocus;
    exit;
  end;
  val(edtB.Text,b,err);
  if err<>0 then begin
    edtB.SetFocus;
    exit;
  end;
  if Client=nil then begin
    if Model=nil then
      Model := TSQLModel.Create([],ROOT_NAME);
    Client := TSQLRestClientURINamedPipe.Create(Model,APPLICATION_NAME);
    Client.SetUser('User','synopse');
    Client.ServiceDefine([ICalculator],sicShared);
  end;
  if Client.Services['Calculator'].Get(I) then
    lblResult.Caption := IntToStr(I.Add(a,b));
end; // here local I will be released
The client code is initialized as such:
A TSQLRestClientURINamedPipe instance is created, with an associated TSQLModel and the given APPLICATION_NAME to access the proper server via named pipe communication;
The connection is authenticated with the default 'User' rights;
The ICalculator interface is defined in the client's internal factory, in sicShared mode (just as in the server).
Once the client is up and ready, the local I: ICalculator variable instance is retrieved, and the remote service is called directly via a simple I.Add(a,b) statement.
You will find in the SQLite3\Samples\16 - Execute SQL via services folder of mORMot source code a Client-Server sample able to access any external database via JSON and HTTP. It is a good demonstration of how to use a non-trivial interface-based service between a client and a server. It will also show how our SynDB.pas classes have a quite abstract design, and are easy to work with, whatever database provider you need to use.
The corresponding service contract has been defined:
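The interface type is not quoted here; reconstructed from the server-side implementation shown below (the GUID being illustrative), it reads along these lines:
type
  TRemoteSQLEngine = (rseOleDB, rseODBC, rseOracle, rseSQlite3, rseJet, rseMSSQL);

  IRemoteSQL = interface(IInvokable)
    ['{3E8D5E31-6A8C-4F2B-9C61-2A7F0D5B41E2}']  // illustrative GUID
    procedure Connect(aEngine: TRemoteSQLEngine; const aServerName,
      aDatabaseName, aUserID, aPassWord: RawUTF8);
    function GetTableNames: TRawUTF8DynArray;
    function Execute(const aSQL: RawUTF8; aExpectResults, aExpanded: Boolean): RawJSON;
  end;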
To Connect() to any external database, given the parameters as expected by a standard TSQLDBConnectionProperties.Create() constructor call;
Retrieve all table names of this external database as a list;
Execute any SQL statement, returning the content as JSON array, ready to be consumed by AJAX applications (if aExpanded is true), or a Delphi client (e.g. via a TSQLTableJSON and the mORMotUI unit).
Of course, this service will be defined in sicClientDriven mode. That is, the framework will be able to manage a client-driven TSQLDBProperties instance life time.
The benefit of this service is that no database connection is required on the client side: a regular HTTP connection is enough. There is no need to install nor configure any database provider on the client.
Due to mORMot optimized JSON serialization, it will probably be faster to work with such plain HTTP / JSON services, instead of a database connection through a VPN. In fact, database connections are made to work on a local network, and do not like high-latency connections, which are typical on the Internet. On the contrary, the mORMot Client-Server process is optimized for such kind of connection.
Note that the Execute() method returns a RawJSON kind of variable, which is in fact a sub-type of RawUTF8. Its purpose is to transmit the UTF-8 encoded content directly, with no translation into a JSON string, as would be the case with a RawUTF8 variable. In fact, escaping a JSON array within a JSON string is quite verbose. Using RawJSON in this case ensures the best client-side and server-side speed, and also reduces the transmission bandwidth.
The server part is quite easy to follow:
type
  TServiceRemoteSQL = class(TInterfacedObject, IRemoteSQL)
  protected
    fProps: TSQLDBConnectionProperties;
  public
    destructor Destroy; override;
  public // implements IRemoteSQL methods
    procedure Connect(aEngine: TRemoteSQLEngine; const aServerName, aDatabaseName,
      aUserID, aPassWord: RawUTF8);
    function GetTableNames: TRawUTF8DynArray;
    function Execute(const aSQL: RawUTF8; aExpectResults, aExpanded: Boolean): RawJSON;
  end;

{ TServiceRemoteSQL }

procedure TServiceRemoteSQL.Connect(aEngine: TRemoteSQLEngine;
  const aServerName, aDatabaseName, aUserID, aPassWord: RawUTF8);
const // rseOleDB, rseODBC, rseOracle, rseSQlite3, rseJet, rseMSSQL
  TYPES: array[TRemoteSQLEngine] of TSQLDBConnectionPropertiesClass = (
    TOleDBConnectionProperties, TODBCConnectionProperties,
    TSQLDBOracleConnectionProperties, TSQLDBSQLite3ConnectionProperties,
    TOleDBJetConnectionProperties, TOleDBMSSQL2008ConnectionProperties);
begin
  if fProps<>nil then
    raise Exception.Create('Connect called more than once');
  fProps := TYPES[aEngine].Create(aServerName,aDatabaseName,aUserID,aPassWord);
end;

function TServiceRemoteSQL.Execute(const aSQL: RawUTF8; aExpectResults, aExpanded: Boolean): RawJSON;
var res: ISQLDBRows;
begin
  if fProps=nil then
    raise Exception.Create('Connect call required before Execute');
  res := fProps.ExecuteInlined(aSQL,aExpectResults);
  if res=nil then
    result := '' else
    result := res.FetchAllAsJSON(aExpanded);
end;

function TServiceRemoteSQL.GetTableNames: TRawUTF8DynArray;
begin
  if fProps=nil then
    raise Exception.Create('Connect call required before GetTableNames');
  fProps.GetTableNames(result);
end;

destructor TServiceRemoteSQL.Destroy;
begin
  FreeAndNil(fProps);
  inherited;
end;
Any exception raised during the SynDB.pas process, or raised manually in case of a wrong use, will be transmitted to the client, just as expected. The fProps instance life-time is handled by the client, so all we need to do is release the instance in the service implementation destructor.
The services are initialized on the server side with the following code:
var
  aModel: TSQLModel;
  aServer: TSQLRestServer;
  aHTTPServer: TSQLHttpServer;
begin
  // define the log level
  with TSQLLog.Family do begin
    Level := LOG_VERBOSE;
    EchoToConsole := LOG_VERBOSE; // log all events to the console
  end;
  // manual switch to console mode
  AllocConsole;
  TextColor(ccLightGray);
  // create a Data Model
  aModel := TSQLModel.Create([],ROOT_NAME);
  try
    // initialize a TObjectList-based database engine
    aServer := TSQLRestServerFullMemory.Create(aModel,'users.json',false,true);
    try
      // register our IRemoteSQL service on the server side
      aServer.ServiceRegister(TServiceRemoteSQL,[TypeInfo(IRemoteSQL)],sicClientDriven).
        // fProps should better be executed and released in the one main thread
        SetOptions([],[optExecInMainThread,optFreeInMainThread]);
      // launch the HTTP server
      aHTTPServer := TSQLHttpServer.Create('888',[aServer],'+',useHttpApiRegisteringURI);
      try
        aHTTPServer.AccessControlAllowOrigin := '*'; // for AJAX requests to work
        writeln(#10'Background server is running.'#10);
        writeln('Press [Enter] to close the server.'#10);
        ConsoleWaitForEnterKey;
      finally
        aHTTPServer.Free;
      end;
    finally
      aServer.Free;
    end;
  finally
    aModel.Free;
  end;
end.
This is a typical mORMot server initialization, published over the HTTP communication protocol (with auto-registration feature, if possible, as stated by the useHttpApiRegisteringURI flag). Since we won't use ORM for any purpose but authentication, a fast TObjectList-based engine (i.e. TSQLRestServerFullMemory) is enough for this sample purpose.
In the above code, you can note that IRemoteSQL service is defined with the optExecInMainThread and optFreeInMainThread options. It means that all methods will be executed in the main process thread. In practice, since SynDB.pas database access may open one connection per thread (e.g. for OleDB / MS SQL or Oracle providers), it may use a lot of memory. Forcing the database execution in the main thread will lower the resource consumption, and still will perform with decent speed (since all the internal marshalling and communication will be multi-threaded in the framework units).
From the client point of view, it will be consumed as such:
procedure TMainForm.FormShow(Sender: TObject);
(...)
  fModel := TSQLModel.Create([],ROOT_NAME);
  fClient := TSQLHttpClient.Create('localhost','888',fModel);
  if not fClient.ServerTimestampSynchronize then begin
    ShowLastClientError(fClient,'Please run Project16ServerHttp.exe');
    Close;
    exit;
  end;
  if (not fClient.SetUser('User','synopse')) or
     (not fClient.ServiceRegisterClientDriven(TypeInfo(IRemoteSQL),fService)) then begin
    ShowLastClientError(fClient,'Remote service not available on server');
    Close;
    exit;
  end;
end;
Our IRemoteSQL service will be accessed in sicClientDriven mode, so here we need to initialize RESTful authentication - see below - with a proper call to SetUser().
Note the use of ShowLastClientError() function of mORMotUILogin unit, which is able to use our SynTaskDialog unit to report standard and detailed information about the latest error.
In this sample, no table has been defined within the ORM model. It is not necessary, since all external process will take place at the SQL level. As we need authentication (see the call to fClient.SetUser method), the ORM core will by itself add the TSQLAuthUser and TSQLAuthGroup tables to the model - no need to add them explicitly.
From now on, we have a fService: IRemoteSQL instance available to connect and process any remote SQL request.
procedure TMainForm.btnOpenClick(Sender: TObject);
var TableNames: TRawUTF8DynArray;
(...)
  with fSettings do
    fService.Connect(Engine,ServerName,DatabaseName,UserID,PassWord);
  TableNames := fService.GetTableNames;
  cbbTableNames.Items.Text := UTF8ToString(RawUTF8ArrayToCSV(TableNames,#13#10));
(...)
Now we are connected to the database via the remote service, and we retrieved the table names in a TComboBox.
Then a particular SQL statement can be executed as such:
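The sample code is not reproduced here; the pattern looks like the following sketch, in which the control and event handler names (btnExecuteClick, mmoSQL, chkExpanded, drwgrdResult) are illustrative, and the exact constructor overloads may vary across framework revisions:
procedure TMainForm.btnExecuteClick(Sender: TObject);
begin
  // execute the SQL statement remotely, retrieve its result as a JSON array,
  // then bind it to a TDrawGrid for display
  TSQLTableToGrid.Create(drwgrdResult,
    TSQLTableJSON.Create('',
      fService.Execute(StringToUTF8(mmoSQL.Text),True,chkExpanded.Checked)),
    fClient);
end;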
Here, TSQLTableToGrid.Create(), from the mORMotUI unit, will "inject" the returned data into a standard TDrawGrid, using a TSQLTableJSON instance to un-serialize the returned JSON content.
Note that in case of any exception (connection failure, or server side error, e.g. a wrong SQL statement), the ShowException() method is used to notify the user with appropriate information.
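16.7. Asynchronous callbacks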
But it may happen that a client application (or service) needs to know the state of a given service. In a pure stateless implementation, it will have to query the server for any state change, i.e. for any pending notification - this is called polling.
Polling may take place for instance:
When a time consuming work is to be processed on the server side. In this case, the client could not wait for it to be finished, without raising a timeout on the HTTP connection: as a workaround, the client may start the work, then ask for its progress status regularly using a timer and a dedicated method call;
When an unpredictable event is to be notified from the server side. In this case, the client should ask regularly (using a timer, e.g. every second), for any pending event, then react on purpose.
It may therefore sound preferable, and in some cases necessary, to let the server notify one or several clients without any prior query, and without requiring a client-side timer:
Polling may be pretty resource consuming on both client and server sides, and add some unwanted latency;
If immediate notification is needed, some kind of "long polling" algorithm may take place, i.e. the server will wait for a long time before returning the notification state if no event did happen: in this case, a dedicated connection is required, in addition to the REST one;
In event-driven systems, a lot of messages are sent to the clients: a proper publish/subscribe mechanism is preferred, otherwise the polling code may grow in complexity and become inefficient and unmaintainable;
Explicit push notifications may be necessary, e.g. when a lot of potential events, associated with a complex set of parameters, are likely to be sent by the client.
Our mORMot framework is therefore able to easily implement asynchronous callbacks over WebSockets, defining the callbacks as interface parameters in service method definitions - see Service Methods Parameters.
16.7.1. WebSockets support
By definition, HTTP connections are stateless and one-way, i.e. a client sends a request to the server, which replies back with an answer. There is no way to let the server send a message to the client, without a prior request from the client side.
WebSockets is a communication protocol which is able to upgrade a regular HTTP connection into a dual-way communication wire. After a safe handshake, the underlying TCP/IP socket is able to be accessed directly, via a set of lightweight frames over an application-defined protocol, without the HTTP overhead.
The SynBidirSock.pas unit implements low-level server and client WebSockets communication.
The TWebSocketProtocol class defines an abstract WebSockets protocol, currently implemented as several classes:
(figure: TWebSocketProtocolJSON classes hierarchy)
For our Client-Server services via interfaces, we will still need to make RESTful requests, so the basic WebSockets framing has been enhanced to support TWebSocketProtocolRest REST-compatible protocols, able to use the single connection for both REST queries and asynchronous notifications. Two classes are available for your SOA applications:
TWebSocketProtocolJSON as a plain JSON protocol, whose frames are transmitted as human-readable (and easy to debug) JSON;
TWebSocketProtocolBinary as a binary proprietary protocol, with optional frame compression and AES encryption (using AES-NI hardware instructions, if available).
In practice, on the server side, you will start your TSQLHttpServer by specifying useBidirSocket as kind of server:
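For instance (a sketch, assuming Server is an already instantiated TSQLRestServer):
// create the HTTP server over a socket implementation able to upgrade to WebSockets
HttpServer := TSQLHttpServer.Create('8888',[Server],'+',useBidirSocket);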
Under the hood, it will instantiate a TWebSocketServer HTTP server, as defined in mORMotHttpServer.pas, based on the sockets API, able to upgrade the HTTP protocol into WebSockets. Our High-performance http.sys server is not yet able to switch to WebSockets - and at API level, it will require at least Windows 8 or Windows 2012 Server.
Then you enable WebSockets for the TWebSocketProtocolBinary protocol, with a symmetric encryption key:
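That is, a single additional line on the server side (sketch, 'encryptionkey' being the shared symmetric key):
// enable upgrades to the binary WebSockets protocol, with symmetric encryption
HttpServer.WebSocketsEnable(Server,'encryptionkey');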
On the client side, you will use a TSQLHttpClientWebsockets instance, as defined in mORMotHttpClient.pas, then explicitly upgrade the connection to use WebSockets (since by default, it will stick to the HTTP protocol):
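On the client side, the corresponding sketch may read:
Client := TSQLHttpClientWebsockets.Create('127.0.0.1','8888',TSQLModel.Create([]));
// explicitly upgrade the HTTP connection to our binary WebSockets protocol
Client.WebSocketsUpgrade('encryptionkey');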
The expected protocol detail should match the one on the server, i.e. 'encryptionkey' encryption over our binary protocol.
Once upgraded to WebSockets, you may use regular REST commands, as usual:
Client.ServerTimestampSynchronize;
But in addition to regular query/answer commands as defined for Client-Server services via interfaces, you will be able to define callbacks using interface parameters to the service methods.
Under the hood, both client and server will communicate using WebSockets frames, maintaining the connection active using heartbeats (via ping/pong frames), and with clean connection shutdown, from any side. You can use the Settings property of the TWebSocketServerRest instance, as returned by TSQLHttpServer.WebSocketsEnable(), to customize the low-level WebSockets protocol (e.g. timeouts or heartbeats) on the server side. The TSQLHttpClientWebsockets.WebSockets.Settings property will allow the same, on the client side.
We have observed, from our regression tests and internal benchmarking, that using our WebSockets may be faster than regular HTTP, since its frames will be sent at once, whereas HTTP headers and body are not sent in the same TCP packet, and compression will be available for the whole frame, whereas HTTP headers are not compressed. The ability to use strong AES encryption will make this means of communication even safer than plain HTTP, even when AES encryption is used over HTTP.
16.7.1.1. Using a "Saga" callback to notify long term end-of-process
An example is better than 100 talks. So let's take a look at the Project31LongWorkServer.dpr and Project31LongWorkClient.dpr samples, from the SQLite3\Samples\31 - WebSockets sub-folder. They implement a client/server application, in which the client launches a long term process on the server side, then is notified when the process is done, either with success, or failure. Such a pattern is very common in the SOA world, also known as "saga" - see http://www.rgoarchitects.com/Files/SOAPatterns/Saga.pdf - but in practice, it may be difficult to implement it safely and easily. Let's see how our framework makes writing sagas a breeze.
First we define the interfaces to be used, in a shared Project31LongWorkCallbackInterface.pas unit:
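The unit content is not quoted here; reconstructed from the client and server code shown below (the GUIDs being illustrative), the two interfaces read along these lines:
type
  // the callback, implemented on the client side, notified by the server
  ILongWorkCallback = interface(IInvokable)
    ['{425BF199-19C7-4B2B-B1A4-A5BE7A9A4748}']  // illustrative GUID
    procedure WorkFinished(const workName: string; timeTaken: integer);
    procedure WorkFailed(const workName, error: string);
  end;

  // the main service, implemented on the server side
  ILongWorkService = interface(IInvokable)
    ['{71F45373-E189-42A1-A688-5C5A05A6BEEF}']  // illustrative GUID
    procedure StartWork(const workName: string; const onFinish: ILongWorkCallback);
    function TotalWorkCount: Integer;
  end;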
The only specific definition is the const onFinish: ILongWorkCallback parameter, supplied to the ILongWorkService.StartWork() method. The client will create a class implementing ILongWorkCallback, then specify it as parameter to this method. On the server side, a "fake" class will implement ILongWorkCallback, then will call back the client using the very same WebSockets connection, when any of its methods will be executed.
As you can see, a single callback interface instance may have several methods, with their own set of parameters (here WorkFinished and WorkFailed), so that the callback may be quite expressive. Any kind of usual parameters will be transmitted, after serialization: string, integer, but even record, dynamic arrays, TSQLRecord or TPersistent values.
When the ILongWorkCallback instance will be released on the client side, the server will be notified, so that any further notification won't create a connection error. We will see later how to handle those events.
16.7.1.2. Client service consumption
The client may be connected to the server as such (see the Project31LongWorkClient.dpr sample source code for the full details, including error handling):
var Client: TSQLHttpClientWebsockets;
workName: string;
Service: ILongWorkService;
callback: ILongWorkCallback;
begin
Client := TSQLHttpClientWebsockets.Create('127.0.0.1','8888',TSQLModel.Create([]));
Client.WebSocketsUpgrade(PROJECT31_TRANSMISSION_KEY);
Client.ServiceDefine([ILongWorkService],sicShared);
Client.Services.Resolve(ILongWorkService,Service);
Then we define our callback, using a dedicated class:
type
  TLongWorkCallback = class(TInterfacedCallback,ILongWorkCallback)
  protected
    procedure WorkFinished(const workName: string; timeTaken: integer);
    procedure WorkFailed(const workName, error: string);
  end;

procedure TLongWorkCallback.WorkFailed(const workName, error: string);
begin
  writeln(#13'Received callback WorkFailed(',workName,') with message "',error,'"');
end;

procedure TLongWorkCallback.WorkFinished(const workName: string;
  timeTaken: integer);
begin
  writeln(#13'Received callback WorkFinished(',workName,') in ',timeTaken,'ms');
end;
Then we specify this kind of callback as parameter to start a long term work:
callback := TLongWorkCallback.Create(Client,ILongWorkCallback);
try
  repeat
    readln(workName);
    if workName='' then
      break;
    Service.StartWork(workName,callback);
  until false;
finally
  callback := nil; // the server will be notified and release its "fake" class
  Service := nil;  // release the service local instance BEFORE Client.Free
end;
As you can see, the client is able to start one or several work processes, then expects to be notified of the process ending on its callback interface instance, without explicitly polling the server for its state, since the connection was upgraded to WebSockets via a call to TSQLHttpClientWebsockets.WebSocketsUpgrade().
16.7.1.3. Server side implementation
The server will define the working thread as such (see the Project31LongWorkServer.dpr sample source code for the full details):
type
  TLongWorkServiceThread = class(TThread)
  protected
    fCallback: ILongWorkCallback;
    fWorkName: string;
    procedure Execute; override;
  public
    constructor Create(const workName: string; const callback: ILongWorkCallback);
  end;

constructor TLongWorkServiceThread.Create(const workName: string;
  const callback: ILongWorkCallback);
begin
  inherited Create(false);
  fCallback := Callback;
  fWorkName := workName;
  FreeOnTerminate := true;
end;

procedure TLongWorkServiceThread.Execute;
var tix: Int64;
begin
  tix := GetTickCount64;
  Sleep(5000+Random(1000)); // some hard work
  if Random(100)>20 then
    fCallback.WorkFinished(fWorkName,GetTickCount64-tix) else
    fCallback.WorkFailed(fWorkName,'expected random failure');
end;
The callback is expected to be supplied as a ILongWorkCallback interface instance, then stored in a fCallback protected field for further notification. Some work is done in the TLongWorkServiceThread.Execute method (here just a Sleep() of more than 5 seconds), and the end-of-work notification is processed, as success or failure (depending on random in this fake process class), on either of the ILongWorkCallback interface methods.
The following class will define, implement and register the ILongWorkService service on the server side:
type
  TLongWorkService = class(TInterfacedObject,ILongWorkService)
  protected
    fTotalWorkCount: Integer;
  public
    procedure StartWork(const workName: string; const onFinish: ILongWorkCallback);
    function TotalWorkCount: Integer;
  end;

procedure TLongWorkService.StartWork(const workName: string;
  const onFinish: ILongWorkCallback);
begin
  InterlockedIncrement(fTotalWorkCount);
  TLongWorkServiceThread.Create(workName,onFinish);
end;

function TLongWorkService.TotalWorkCount: Integer;
begin
  result := fTotalWorkCount;
end;

var HttpServer: TSQLHttpServer;
    Server: TSQLRestServerFullMemory;
begin
  Server := TSQLRestServerFullMemory.CreateWithOwnModel([]);
  Server.ServiceDefine(TLongWorkService,[ILongWorkService],sicShared);
  HttpServer := TSQLHttpServer.Create('8888',[Server],'+',useBidirSocket);
  HttpServer.WebSocketsEnable(Server,PROJECT31_TRANSMISSION_KEY);
  ...
The purpose of those methods is just to create and launch the TLongWorkServiceThread process from a client request, then maintain a total count of started works, in a sicShared service instance - see Instances life time implementation - hosted in a useBidirSocket kind of HTTP server.
We have to explicitly call TSQLHttpServer.WebSocketsEnable() so that this server will be able to upgrade to our WebSockets protocol, using our binary framing, and the very same symmetric encryption key as on the client side - shared as a PROJECT31_TRANSMISSION_KEY constant in the sample, but which may be safely stored on both sides.
16.7.2. Publish-subscribe for events
In event-driven architectures, the publish-subscribe messaging pattern is a way of letting senders (called publishers) transmit messages to their receivers (called subscribers), without any prior knowledge of who those subscribers are. In practice, the subscribers will express interest for a set of messages, and the publisher will send each message to all the subscribers of that message, as soon as it is notified.
(figure: Publish-Subscribe Pattern)
In our Client-Server services via interfaces implementation, messages are gathered in interface types, and each message is defined as a single method, its content being the method parameters. Most SOA alternatives (in Java or C#) require a class definition for each message. Our KISS approach will just use the method parameter values as the message definition.
To maintain a list of subscribers, the easiest is to store a dynamic array of interface instances, on the publisher side.
16.7.2.1. Defining the interfaces
We will now implement a simple chat service, able to let several clients communicate together, broadcasting any message to all the other connected instances. This sample is also located in the SQLite3\Samples\31 - WebSockets sub-folder, as Project31ChatServer.dpr and Project31ChatClient.dpr.
So you first define the callback interface, and the service interface:
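The shared definitions are not quoted here; reconstructed from the client and server code shown below (the GUIDs being illustrative), they read along these lines:
type
  // the callback, implemented on the client (subscriber) side
  IChatCallback = interface(IInvokable)
    ['{6B2F1347-0C35-4F2D-89C3-26A9E5A2B8D4}']  // illustrative GUID
    procedure NotifyBlaBla(const pseudo, msg: string);
  end;

  // the service, implemented on the server (publisher) side
  IChatService = interface(IInvokable)
    ['{92B75FCE-6E41-4690-A8D5-0E3C11B07F61}']  // illustrative GUID
    procedure Join(const pseudo: string; const callback: IChatCallback);
    procedure BlaBla(const pseudo, msg: string);
    procedure CallbackReleased(const callback: IInvokable; const interfaceName: RawUTF8);
  end;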
The main command of the IChatService service is BlaBla(), which should be propagated to all client instances having Joined the conversation, via IChatCallback.NotifyBlaBla() events.
Those interface types will be shared by both server and client sides, in the common Project31ChatCallbackInterface.pas unit. The definition is pretty close to what we wrote when Using a "Saga" callback to notify long term end-of-process. For instance, if 3 people did join the chat room, the following process should take place:
(figure: Chat Application using Publish-Subscribe)
The only additional method is IChatService.CallbackReleased(), which, by convention, will be called on the server side when any callback interface instance is released on the client side.
As such, the IChatService.Join() method will implement the subscription to the chat service, whereas IChatService.CallbackReleased() will be called when the client-side callback instance is released (i.e. when its variable is assigned nil), to unsubscribe from the chat service.
16.7.2.2. Writing the Publisher
On the server side, each call to IChatService.Join() will subscribe to an internal list of connections, simply stored as an array of IChatCallback:
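A minimal sketch of this publisher-side storage, and of the Join() subscription using the InterfaceArrayAdd() helper, may look like:
type
  TChatService = class(TInterfacedObject,IChatService)
  protected
    fConnected: array of IChatCallback;  // the list of subscribers
  public
    procedure Join(const pseudo: string; const callback: IChatCallback);
    procedure BlaBla(const pseudo, msg: string);
    procedure CallbackReleased(const callback: IInvokable; const interfaceName: RawUTF8);
  end;

procedure TChatService.Join(const pseudo: string; const callback: IChatCallback);
begin
  InterfaceArrayAdd(fConnected,callback); // subscribe the incoming callback
end;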
Then a remote call to the IChatService.BlaBla() method should be broadcasted to all connected clients, just by calling the IChatCallback.NotifyBlaBla() method:
procedure TChatService.BlaBla(const pseudo,msg: string);
var i: integer;
begin
  for i := 0 to high(fConnected) do
    fConnected[i].NotifyBlaBla(pseudo,msg);
end;
Note that every call to IChatCallback.NotifyBlaBla() within the loop will be made via WebSockets, in an asynchronous and non blocking way, so that even with a huge number of clients, the IChatService.BlaBla() method won't block. In case of a high number of messages, the framework is even able to gather push notification messages into a single bigger message, to reduce the resource use - see Real-time synchronization.
If you are a bit paranoid, you may ensure that the notification process will continue, if any of the event failed:
procedure TChatService.BlaBla(const pseudo,msg: string);
var i: integer;
begin
  for i := high(fConnected) downto 0 do // downwards for InterfaceArrayDelete()
    try
      fConnected[i].NotifyBlaBla(pseudo,msg);
    except
      InterfaceArrayDelete(fConnected,i); // unsubscribe the callback on failure
    end;
end;
This safer implementation will unregister any failing callback. If the notification raised an exception, it will ensure that this particular invalid subscriber won't be notified any more. Note that since we may reduce the fConnected[] array size on the fly, the loop is processed downwards, to avoid any access violation.
On the server side, the service implementation has been registered as such:
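That is, something like the following sketch:
// register TChatService in sicShared mode, with thread-safe method execution
Server.ServiceDefine(TChatService,[IChatService],sicShared).
  SetOptions([],[optExecLockedPerInterface]);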
Here, the optExecLockedPerInterface option has been set, so that all method calls will be made thread-safe: concurrent access to the internal fConnected[] list will be protected by a lock. Since a global list of connections is to be maintained, the service life time has been defined as sicShared - see Instances life time implementation.
The following method will be called by the server, when a client callback instance is released (either explicitly, or if the connection is broken), so could be used to unsubscribe to the notification, simply by deleting the callback from the internal fConnected[] array:
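A possible implementation, using the InterfaceArrayDelete() helper already seen above, could be:
procedure TChatService.CallbackReleased(const callback: IInvokable;
  const interfaceName: RawUTF8);
begin
  if interfaceName='IChatCallback' then
    InterfaceArrayDelete(fConnected,callback); // unsubscribe the lost client
end;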
The framework will in fact recognize the following method definition in any interface type for a service (it will check the method name, and the method parameters):
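That is, by convention, a method declared as follows (parameter names may vary, but the types and their meaning are as described below):
procedure CallbackReleased(const callback: IInvokable; const interfaceName: RawUTF8);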
When a callback interface parameter (in our case, IChatCallback) will be released on the client side, this method will be called with the corresponding interface instance and type name as parameters. You do not have to call explicitly any method on the client side to unsubscribe a service: assigning nil to a callback variable, or freeing the class instance owning it as a field on the subscriber side, will automatically unregister it on the publisher side.
16.7.2.3. Consuming the service from the Subscriber side
On the client side, you implement the IChatCallback callback interface:
type
  TChatCallback = class(TInterfacedCallback,IChatCallback)
  protected
    procedure NotifyBlaBla(const pseudo, msg: string);
  end;

procedure TChatCallback.NotifyBlaBla(const pseudo, msg: string);
begin
  writeln(#13'@',pseudo,' ',msg);
end;
The TInterfacedCallback type defines a TInterfacedObject sub-class, which will automatically notify the REST server when it is released. By providing the client TSQLRest instance to the TChatCallback.Create() constructor, you will ensure that the IChatService.CallbackReleased method will be executed on the server side, when the TChatCallback/IChatCallback instance will be released on the client side.
Then you subscribe to your remote service as such:
var Service: IChatService;
    callback: IChatCallback;
...
  Client.ServiceDefine([IChatService],sicShared);
  if not Client.Services.Resolve(IChatService,Service) then
    raise EServiceException.Create('Service IChatService unavailable');
...
  callback := TChatCallback.Create(Client,IChatCallback);
  Service.Join(pseudo,callback);
...
  try
    repeat
      readln(msg);
      if msg='' then
        break;
      Service.BlaBla(pseudo,msg);
    until false;
  finally
    callback := nil; // will unsubscribe from the remote publisher
    Service := nil;  // release the service local instance BEFORE Client.Free
  end;
You could easily implement more complex publish/subscribe mechanisms, including filtering, time to live or tuned broadcasting, by storing some additional information to the interface instance (e.g. some value to filter, a timestamp). A dynamic array of dedicated records - see TDynArray dynamic array wrapper, or a list of class instances, may be used to store the subscribers expectations.
16.7.2.4. Subscriber multiple redirection
Sometimes, in a complex business system, you will define several uncoupled parts of your code subscribing to the same service events. In a DDD architecture, this will typically happen when several domain bounded contexts subscribe to a single event source, implemented in the infrastructure layer.
The easiest implementation path is to have each part registering from its side. But it will induce some redundant traffic with the publisher, and it will most probably end up with duplicated code on the subscribers' side.
You may try TSQLRest.MultiRedirect and register once to a remote service, then use an internal registration mechanism to have every part of your business logic registering and consuming the events. The method returns an IMultiCallbackRedirect interface, allowing registration of sub-callbacks, with an optional set of method names, if only a sub-set of events are needed.
Note that sub-callbacks do not need to inherit from the TInterfacedCallback type: a regular TInterfacedObject is enough. They will be automatically unregistered from the internal list, if they raise an exception.
16.7.2.5. Proper threaded implementation
A multi-threaded mORMot server will use critical sections to protect shared data, and avoid potential race conditions. But even on the client side, callbacks will be executed in the context of the WebSockets transmission thread. And in a typical micro-services or event-driven architecture, most nodes are clients and servers at the same time, creating a peer-to-peer mesh of services. So you should prevent any race conditions in each and every node, by protecting access to any shared data.
Likewise, if your callback triggers another method which shares the same critical section in another thread, you may encounter deadlock issues. If an event triggers a callback within a critical section used to protect a shared resource, and if this callback runs a blocking REST request, the REST answer will be received in the context of the transmission thread. If this answer tries to access the same shared resource, there will be a conflict with the main critical section lock, so the execution will lock.
To implement proper thread-safety of your callback process, you could follow some patterns:
Use several small critical sections, protecting any shared data, with the smallest granularity possible. You may use TSynLocker mutex or TLockedDocVariant schema-less storage.
In your regression tests, ensure you run multi-threaded scenarios, with parallel requests. You may find in TSynParallelProcess an easy way of running concurrent client/server tests. It will help finding out most obvious implementation issues.
By definition, most deadlocks are difficult to reproduce - they are some kind of "Heisenbugs". You may ensure proper logging of the callback process, so that you will be able to track any deadlock which may occur in production.
A good idea may be to gather all non-blocking callback process in a background thread using TSQLRest.AsynchRedirect. This method will implement any interface via a fake class, which will redirect all methods calls into calls of another interface, but as a FIFO in a background thread. So you will ensure that all callback process will take place in a single thread, avoiding most concurrency issues. As a side effect, the internal FIFO will leverage other threads, so may help scaling your system. For a client application using some User Interface, see below a lock-free alternative.
Multi-threading is the key to performance. But it is also hard to properly implement. By following those simple rules, you may reduce the risk of concurrency issues.
16.7.2.6. Interacting with UI/VCL
As we have stated, all callback notifications do take place in the transmission thread, i.e. in the TWebSocketProcessClientThread instance corresponding to each connected client.
You may be tempted to use the VCL Synchronize() method, as usual, to forward the notifications to the UI layer. Unfortunately, this may trigger some unexpected concurrency issue, e.g. when asynchronous notifications (e.g. TChatCallback.NotifyBlaBla()) are received during a blocking REST command (e.g. Service.BlaBla()). A Synchronize() call within the incoming notification will wait for the main thread to become available, while the main thread is itself waiting for the answer of the pending REST command, so the reception of that answer is blocked in turn... If you experience random hangs of your User Interface, and 404 errors corresponding to a low-level WebSockets timeout, even when closing the application, you have certainly hit such a deadlock.
Get rid of all your Synchronize() calls! Use Windows messages instead: they are safe, efficient and fast. The framework allows to forward all incoming notifications as a dedicated Windows message in a single line:
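The two lines in question are, in essence (a sketch, assuming a WM_SERVICENOTIFICATION custom message constant, e.g. WM_USER+1, and the ServiceNotificationMethodViaMessages() / ServiceNotificationMethodExecute() methods of TSQLRestClientURI):
// 1. ask the client to forward any incoming notification as a Windows message
Client.ServiceNotificationMethodViaMessages(Handle,WM_SERVICENOTIFICATION);

// 2. handle that message in the form, e.g. via a handler declared as
//    procedure ServiceNotification(var Msg: TMessage); message WM_SERVICENOTIFICATION;
procedure TForm1.ServiceNotification(var Msg: TMessage);
begin
  TSQLRestClientURI.ServiceNotificationMethodExecute(Msg);
end;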
Thanks to these two lines, the callbacks will be executed asynchronously in the main UI thread, using the optimized Message queue of the Operating System, without any blocking execution, nor race condition.
16.7.3. Interface callbacks instead of class messages
If you compare with existing client/server SOA solutions (in Delphi, Java, C# or even in Go or other frameworks), this interface-based callback mechanism sounds pretty unique and easy to work with.
Most event-oriented solutions use a set of dedicated messages to propagate the events, with a centralized Message Bus (like MSMQ or JMS), or a P2P approach (see e.g. ZeroMQ or NanoMsg). In practice, you are expected to define one class per message, the class fields being the message values. You will define e.g. one class to notify a successful process, and another class to notify an error. SOA services will eventually tend to be defined by a huge number of individual classes, with the temptation of re-using existing classes in several contexts.
Our interface-based approach allows gathering all messages:
In a single interface type per notification, i.e. probably per service operation;
With one method per event;
Using method parameters defining the event values.
Since asynchronous notifications are needed most of the time, method parameters will be one-way, i.e. only const. Blocking requests may also be defined, as we will see below. And an evolved algorithm will transparently gather outgoing messages, to enhance scalability.
Behind the scene, the framework will still transmit raw messages over IP sockets, like other systems, but events notification will benefit from using interfaces, on both server and client sides.
16.7.3.1. Using service and callback interfaces
For instance, you may define the following generic service and callback to retrieve a file from a remote camera, using mORMot's interface-based approach:
type
  // define some custom types to make the implicit explicit
  TCameraID = RawUTF8;
  TPictureID = RawUTF8;

  // mORMot notifications using a callback interface definition
  IMyCameraCallback = interface(IInvokable)
    ['{445F967F-79C0-4735-A972-0BED6CC63D1D}']
    procedure Started(const Camera: TCameraID; const Picture: TPictureID);
    procedure Progressed(const Camera: TCameraID; const Picture: TPictureID;
      CurrentSize,TotalSize: cardinal);
    procedure Finished(const Camera: TCameraID; const Picture: TPictureID;
      const PublicURI: RawUTF8; TotalSize: cardinal);
    procedure ErrorOccured(const Camera: TCameraID; const Picture: TPictureID;
      const MessageText: RawUTF8);
  end;

  // mORMot main service, also defined as an interface
  IMyCameraService = interface(IInvokable)
    ['{3CE61E74-A01D-41F5-A414-94F204F140E1}']
    function TakePicture(const Camera: TCameraID; const Callback: IMyCameraCallback): TPictureID;
  end;
Take a deep breath, and keep in mind those two type definitions as reference. At a single glance, you probably got the intent of the "Camera Service". We will now compare it with a classical message-based pattern.
16.7.3.2. Classical message(s) event
With a class-based message kind of implementation, you will probably define a single class, containing all potential information:
type
  // a single class message will need a status
  TMyCameraCallbackState = (
    ccsStarted, ccsProgressed, ccsFinished, ccsErrorOccured);

  // the single class message
  TMyCameraCallbackMessage = class
  private
    fCamera: TCameraID;
    fPicture: TPictureID;
    fTotalSize: cardinal;
    fMessageText: RawUTF8;
    fState: TMyCameraCallbackState;
  published
    property State: TMyCameraCallbackState read fState write fState;
    property Camera: TCameraID read fCamera write fCamera;
    property Picture: TPictureID read fPicture write fPicture;
    property TotalSize: cardinal read fTotalSize write fTotalSize;
    property MessageText: RawUTF8 read fMessageText write fMessageText;
  end;
This single class is easy to write, but makes it a bit confusing to consume the notification. Which field comes with which state? The client-side code will eventually consist of a huge case aMessage.State of ... block, with potential issues. The business logic does not appear in this type definition. Easy to write, difficult to read - and maintain...
In order to have an implementation closer to SOLID design principles, you may define a set of classes, as such:
type
  // all classes will inherit from this one, to have common properties
  TMyCameraCallbackAbstract = class
  private
    fCamera: TCameraID;
    fPicture: TPictureID;
  published
    property Camera: TCameraID read fCamera write fCamera;
    property Picture: TPictureID read fPicture write fPicture;
  end;

  // message class when the picture acquisition starts
  TMyCameraCallbackStarted = class(TMyCameraCallbackAbstract);

  // message class when the picture is acquired
  TMyCameraCallbackFinished = class(TMyCameraCallbackAbstract)
  private
    fPublicURI: RawUTF8;
    fTotalSize: cardinal;
  published
    property TotalSize: cardinal read fTotalSize write fTotalSize;
    property PublicURI: RawUTF8 read fPublicURI write fPublicURI;
  end;

  // message during picture download
  TMyCameraCallbackProgressed = class(TMyCameraCallbackFinished)
  private
    fCurrentSize: cardinal;
  published
    property CurrentSize: cardinal read fCurrentSize write fCurrentSize;
  end;

  // error message
  TMyCameraCallbackErrorOccured = class(TMyCameraCallbackAbstract)
  private
    fMessageText: RawUTF8;
  published
    property MessageText: RawUTF8 read fMessageText write fMessageText;
  end;
Inheritance makes this class hierarchy not as verbose as it may have been with plain "flat" classes, but it is still much less readable than the IMyCameraCallback type definition.
In both cases, such class definitions make it difficult to guess to which message matches which service. You must be very careful and consistent about your naming conventions, and uncouple your service definitions in clear name spaces.
When implementing SOA services, DDD's Ubiquitous Language tends to be polluted by the class definition (getters and setters), and by the implementation details of the message-based notification: your Domain code will be tied to the message-oriented nature of the Infrastructure layer. We will see below how interface callbacks will help implementing DDD's Event-Driven pattern, in a cleaner way.
16.7.3.3. Workflow adaptation
Sometimes, it may be necessary to react to some unexpected event. The consumer may need to change the workflow of the producer, depending on some business rules, an unexpected error, or end-user interaction.
By design, message-based implementations are asynchronous, and non-blocking: messages are sent and stored in a message broker/bus, and its internal processing loop propagates the messages to all subscribers. In such an implementation, there is no natural place for "reverse" feedback messages.
A common pattern is to have a dedicated set of "answer/feedback" messages, to notify the service providers of a state change - it comes with potential race conditions, or unexpected rebound phenomena, for instance when you add a node to an existing event-driven system.
Another solution may be to define explicit rules for service providers, e.g. when the service is called. You may define a set of workflows, injected to the provider/bus service at runtime. It will definitely tend to break the Single Responsibility Principle, and put logic in the infrastructure layer.
On the other hand, since mORMot's callbacks are true interface methods, they may return some values (as a function result or a var/out parameter). On the server side, such callbacks will block and wait for the client end to respond.
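For instance, a hedged sketch of such a blocking callback - a hypothetical interface, not part of the framework - could be:
type
  // hypothetical callback: the consumer can steer the producer's workflow,
  // since the function result will be waited for on the server side
  IWorkflowCallback = interface(IInvokable)
    ['{2B1E2A69-7D47-4E0C-9D89-3C5E3B1F0A11}']  // illustrative GUID
    function ShouldContinue(const stepName: RawUTF8;
      progressPercent: integer): boolean;
  end;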
With such blocking callbacks, you will be able to implement any needed complex workflow adaptation, in real time. The server side code will still be very readable and efficient, with no complex plumbing, wait queue or state machine to set up.
16.7.3.4. From interfaces comes abstraction and ease
As an additional benefit, integration with the Delphi language is clearly implementation agnostic: you are not even tied to use the framework, when working with such interface type definitions. In fact, this is a good way of implementing callbacks conforming to SOLID design principles on the server side, and let the mORMot framework publish this mechanism in a client/server way, by using WebSockets, only if necessary.
The very same code could be used on the server side, with no transmission nor marshalling overhead (via direct interface instance calls), and over a network, with optimized use of resource and bandwidth (via "fake" interface calls, and binary/JSON marshalling over TCP/IP).
On the server side, your code - especially your Domain code - may interact directly with the lower level services, defined in the Domain as interface types, and implemented in the infrastructure layer. You may host both Domain and Infrastructure code in a single server executable, with direct assignment of local class instance as callbacks. This will minimize the program resources, in both CPU and memory terms - which is always a very valuable goal, for any business system.
You may be able to reuse your application and business logic in a stand-alone application, with similar direct calls from the UI to the application interface. If needed, the interface variable may point to a remote mORMot server, without touching VCL/FMX code.
Last but not least, using an interface will help implementing the whole callback mechanism using Stubs and mocks, e.g. for easy unit testing via Calls tracing. You may also write your unit tests with real local callback class instances, which will be much easier to debug than over the whole client/server stack. Once you identified a scenario which fails the system, you could reproduce it with a dedicated test, even in an aggressive multi-threaded way, then use the debugger to trace the execution and identify the root cause of the issue.
16.8. Implementation details
16.8.1. Error handling
Usually, in Delphi applications (like in most high-level languages), errors are handled via exceptions. By default, any Exception raised on the server side, within an interface-based service method, will be intercepted, and transmitted as an HTTP error to the client side, then a safe but somewhat obfuscated EInterfaceFactoryException will be raised, containing additional information serialized as JSON.
You may wonder why exceptions are not transmitted and raised directly on the client side, as if they were executed locally. In fact, Exceptions are not value objects, but true class instances, with some methods and potentially internal references to other objects. Most of the time, they are tied to a particular execution context, and even some low-level implementation details. A Delphi exception is even something very specific, and will not be easily converted into e.g. a JavaScript, Java or C# exception.
In practice, re-creating and raising an instance of the same Exception class which occurred on the server side would induce a strong dependency of the client code towards the server implementation details. For instance, if the server side raises an ESQLDBOracle exception, translating it on the other end would link your client side with the whole SynDBOracle.pas unit, which is certainly not worth it. The ESQLDBOracle exception, by itself, contains a link to an Oracle statement instance, which will be lost when transmitted over the wire. Some client platforms (e.g. mobile or AJAX) do not even have any knowledge of what an Oracle database is... As such, exceptions are not good candidates for serialization and transmission by value, from the server side to the client side. So we will NOT be in favor of propagating exceptions to the client side.
This is why exceptions should better be intercepted on the server side, with a try .. except block within the service methods, then converted into low level DTO types, specific to the service, then explicitly transmitted as error codes to the client.
The first rule is that raising an exception should be exceptional - as its name states: exceptional. That is, service code should not raise an exception in normal execution, even in case of wrong input. For instance, a wrong input parameter should lead to an application level error, transmitted as an enumeration item and/or some additional (probably textual) information, but the business logic should never raise any exception. Only in case of a low-level unexpected event (e.g. a SQL level failure, a GPF or Access Violation, a communication error with another trusted internal service) may the server side enter panic mode, and raise an exception. Remember that exceptions are intercepted by SynLog.pas and can easily be logged - see below: you will be able to identify the execution context, and find a full stack trace of the issue. But most common errors should be handled at the business logic level, with dedicated error types defined in each service layer.
In practice, you may use an enumerate, in conjunction with a variant for additional structured information (as a string or a more complex TDocVariant), to transmit an error to the client side. You may define dedicated types at every layer, e.g. with interface types for Domain services, or Application services.
The first cqrsSuccess item of the TCQRSResult enumerate will be the default one (mapped and transmitted to a 0 JSON number), so in case of any stub or mock of the interfaces, fake methods will return as successful, as expected - see Stubs and mocks.
When any exception is raised in a service method, a TCQRSResult enumeration value can be returned as result, so that error will be transmitted directly:
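As a minimal illustrative sketch of this pattern (the TMyOrderService class, its SubmitOrder method and the TOrderDTO type are hypothetical names, not part of the framework), such a method body could look like:
function TMyOrderService.SubmitOrder(const aOrder: TOrderDTO): TCQRSResult;
begin
  result := cqrsSuccess;
  try
    // ... actual business logic here: validate aOrder, persist it, notify, etc.
  except
    on E: Exception do
      result := cqrsInternalError; // transmitted as a plain enumeration value, not as an exception
  end;
end;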
But such exception should be exceptional, as we already stated.
The mORMotDDD.pas unit defines, in the TCQRSQueryObject abstract class, some protected methods to handle errors and exceptions as expected by ICQRSService. For instance, the TCQRSQueryObject.CqrsSetResult() method will set result := cqrsInternalError and serialize the E: Exception within the internal variant used for additional error, ready to be retrieved using ICQRSService.GetLastErrorInfo.
Exceptions are very useful to interrupt a process in case of a catastrophic failure, but they are not the best method for transmitting errors over remote services. Some newer languages (e.g. Google's Go) do not even define any exception type at language or RTL level, but rely on returned values to transmit the errors in between execution contexts - see https://golang.org/doc/faq#exceptions: in our client-server error handling design, we followed the same idea.
16.8.2. Security
Security is implemented at several levels, following the main security patterns of mORMot - see below:
Process safety, mainly for communication stream - e.g. when using HTTPS protocol at the Client-Server process, or a custom cypher within HTTP content-encoding;
At RESTful / URI authentication level - see below about Session, Group and User notions;
Via authorization at interface or method (service/operation) level to allow or forbid a given operation.
Let us now discuss the last two points (authentication and authorization).
By default, the settings are the following for interface-based services:
All services (i.e. all interfaces) expect one authentication scheme to be validated (at least TSQLRestServerAuthenticationWeak), i.e. a light session to have been initiated by the client - in short, explicit authentication is mandatory;
All operations (i.e. all methods) are allowed to be executed - in short, authorization is enabled but open.
You can change these settings on the server side (it is an implementation detail, so it does not make any sense to tune it on the client side) via the TServiceFactoryServer instance corresponding to each interface. You can access those instances e.g. from the TSQLRestServer.Services property.
To disable the whole service / interface need of authentication, you can use the ByPassAuthentication property of the TServiceFactoryServer instance corresponding to a given interface. It may be useful e.g. for simple web services which do not expose any sensitive data (e.g. a service catalog, or a service returning public information or even HTML content).
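For instance, a minimal sketch (assuming aServer is your TSQLRestServer instance, on which the ICalculator service has been registered) could be:
// this interface won't require any prior authentication
(aServer.Services['Calculator'] as TServiceFactoryServer).ByPassAuthentication := true;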
Then, to tune the authorization process at operational (method) level, TServiceFactoryServer provides the following methods to change the security policy for each interface:
AllowAll() and Allow() to enable methods execution globally;
DenyAll() and Deny() to disable methods execution globally;
AllowAllByID() and AllowByID() to enable methods execution by Group IDs;
DenyAllByID() and DenyByID() to disable methods execution by Group IDs;
AllowAllByName() and AllowByName() to enable methods execution by Group names;
DenyAllByName() and DenyByName() to disable methods execution by Group names.
The first four methods will affect everybody. The next four *ByID() methods accept a list of authentication Group IDs (i.e. TSQLAuthGroup.ID values), whereas the *ByName() methods will handle TSQLAuthGroup.Ident property values.
In fact, the execution can be authorized for a particular group of authenticated users. Your service can therefore provide some basic features, and then enables advanced features for administrators or supervisors only. Since the User / Group policy is fully customizable in our RESTful authentication scheme - see below, mORMot provides a versatile and inter-operable security pattern.
Here is some extract of the supplied regression tests:
(...)
S := fClient.Server.Services['Calculator'] as TServiceFactoryServer;
Test([1,2,3,4,5],'by default, all methods are allowed');
S.AllowAll;
Test([1,2,3,4,5],'AllowAll should change nothing');
S.DenyAll;
Test([],'DenyAll will reset all settings');
S.AllowAll;
Test([1,2,3,4,5],'back to full acccess for everybody');
S.DenyAllByID([GroupID]);
Test([],'our current user shall be denied');
S.AllowAll;
Test([1,2,3,4,5],'restore allowed for everybody');
S.DenyAllByID([GroupID+1]);
Test([1,2,3,4,5],'this group ID won''t affect the current user');
S.DenyByID(['Add'],GroupID);
Test([2,3,4,5],'exclude a specific method for the current user');
S.DenyByID(['totext'],GroupID);
Test([2,3,5],'exclude another method for the current user');
(...)
In the above regression tests code, the Test() local procedure is used to validate the corresponding methods of ICalculator according to a set of method indexes (1=Add, 2=Multiply, 3=Subtract, 4=ToText...).
In this code, the GroupID value was retrieved as such:
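The exact extract is not reproduced here, but a hedged sketch of what it may look like (assuming the default TSQLAuthGroup table, and that the current test user belongs to the 'User' group) is:
GroupID := fClient.MainFieldID(TSQLAuthGroup,'User');
In the same spirit, a whole service can be restricted to a single group - an illustrative sketch only:
S.DenyAll;
S.AllowAllByName(['Supervisor']);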
This will allow access to the ICalculator methods only for the Supervisor group of users.
16.8.3. Implementation class types
Most of the time, your implementation class will inherit from TInterfacedObject. As stated above, you could in fact inherit from any plain Delphi class: the only condition is that it implements the expected interface, and has a GUID.
But if you need a special process to take place during the class instance initialization, you can inherit from the TInterfacedObjectWithCustomCreate class, which provides the following virtual constructor, ready to be overridden with your customized initialization:
TInterfacedObjectWithCustomCreate = class(TInterfacedObject)
public
  /// this virtual constructor will be called at instance creation
  constructor Create; virtual;
end;
But from the SOA point of view, it could make sense to use a dedicated method with proper parameters to initialize your instance, e.g. if you are in sicClientDriven execution mode. See in Enhanced sample: remote SQL access some sample code implementing an IRemoteSQL service, with a dedicated Connect() method to be called before all other methods to initialize a sicClientDriven instance.
16.8.4. Server-side execution options (threading)
When a service is registered on the server side, some options can be defined in order to specify its execution details, using the TServiceFactoryServer.SetOptions() method.
By default, service methods are called within the thread which received them. That is, when hosted by multi-threaded server instances (e.g. TSQLite3HttpServer or TSQLRestServerNamedPipeResponse), the method context can be re-entrant - unless it has been defined with sicSingle or sicPerThread instance lifetime modes. It allows better response time and CPU use, but the drawback is that the method implementation shall be thread-safe. This is the technical reason why service implementation methods have to handle multi-threading safety carefully, e.g. by using Safe locks for multi-thread applications on purpose.
The following execution options are available:
TServiceMethodOptions | Description
none (default) | All methods are re-entrant and shall be coded to be thread-safe
optExecLockedPerInterface | Each interface will be protected/locked by its own mutex
optExecInMainThread | Methods will be executed in the process main thread
optFreeInMainThread | Interface will be released in the process main thread
optExecInPerInterfaceThread | Each interface will execute its methods in its own thread
optFreeInPerInterfaceThread | Each interface will be freed in its own thread
Of course, SetOptions() accepts an optional list of method names, if you want to tune the execution at the method level.
Setting the optExecLockedPerInterface option will lock the specified method(s) execution at the interface level. That is, it won't be possible to have two methods of the same interface executed concurrently. This option uses a TRTLCriticalSection mutex, so it is both safe and very light on resources. But it won't guarantee that the method execution will always take place in the same thread: so if you need some per-thread initialization/finalization (e.g. for COM objects), you should better use the other options.
Setting the optExecInMainThread option will force the specified method(s) to be called within a RunningThread.Synchronize() call - it can be used e.g. if your implementation relies heavily on COM objects, or if you want to ensure that your code will work correctly, without the need to worry about thread safety, which can be quite difficult to deal with. The optFreeInMainThread option will also ensure that the service class instance will be released in the main thread (i.e. its Free method called via Synchronize). Since the main thread will be used by all interfaces, it could result in an execution bottleneck.
Setting the optExecInPerInterfaceThread option will force the specified method(s) to be called within a thread (to be more precise, a TSynBackgroundThreadSQLRestServerProcedure class, which will notify the TSQLRestServer of the thread context) dedicated to the interface. The associated optFreeInPerInterfaceThread option will also ensure that the service class instance will be released in the same thread: it is pretty convenient to use this threading model, for instance if you want to maintain a dedicated SynDB.pas-based database connection, or initialize some COM objects.
For instance, if you want all the methods of your TServiceCalculator class to be executed in the main thread, you can define:
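A hedged sketch of such a registration (assuming aServer is your TSQLRestServer instance; the exact SetOptions() signature may vary with the framework revision) could be:
aServer.ServiceRegister(TServiceCalculator,[TypeInfo(ICalculator)],sicShared).
  SetOptions([],[optExecInMainThread,optFreeInMainThread]);
// an empty method name list is expected to apply the options to all methods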
In fact, the SetOptions() method follows a call signature similar to the one used for defining the service security.
For best performance, you may define your service methods to be called without any locking, but rely on some convenient classes defined in SynCommons.pas - such as the TAutoLocker class or the TLockedDocVariant kind of storage - for efficient multi-thread process. A similar thread safety concern also applies to MVVM methods - see below.
16.8.5. Audit Trail for Services
We have seen previously how the ORM part of the framework is able to provide an Audit Trail for change tracking. It is a very convenient way of storing the change of state of the data. On the other side, in any modern SOA solution, data is not at the center any more, but services. Sometimes, the data is not stored within your server, but in a third-party Service-Oriented Architecture (SOA). Being able to monitor the service execution of the whole system becomes sooner or later mandatory. Our framework allows to create an Audit Trail of any incoming or outgoing service operation, in a secure, efficient and automated way.
16.8.5.1. When logging is not enough
By default, any interface-based service process will be logged by the framework - see below - in dedicated sllServiceCall and sllServiceReturn log levels. You may see output similar to the following:
18:03:18 Enter mORMot.TSQLRestServerFullMemory(024500A0).URI(POST root/DomUserQuery.SelectByLogonName/1 inlen=7)
18:03:18 Service call mORMot.TSQLRestServerFullMemory(024500A0) DomUserQuery.SelectByLogonName["979"]
18:03:18 Server mORMot.TSQLRestServerFullMemory(024500A0) POST root/DomUserQuery.SelectByLogonName SOA-Interface -> 200 with outlen=21 in 16 us
18:03:18 Service return mORMot.TSQLRestServerFullMemory(024500A0) {"result":[0],"id":1}
18:03:18 Leave 00.000.017
The above lines match the execution of the following method, as defined in dddDomUserCQRS.pas:
IDomUserQuery = interface(ICQRSService)
  ['{198C01D6-5189-4B74-AAF4-C322237D7D53}']
  /// will select a single TUser from its logon name
  // - then use Get() method to retrieve its content
  function SelectByLogonName(const aLogonName: RawUTF8): TCQRSResult;
  ...
This detailed log (including micro-second timing on the "Leave" rows) is very helpful for support, especially to investigate about any error occurring on a production server. But it will not be enough (or on the contrary provide "too much information" which "kills the information") to monitor the higher level of the process, especially on a server with a lot of concurrent activity.
16.8.5.2. Tracing Service Methods
The framework allows to optionally store each SOA method execution in a database, with the input and output parameters, and accurate timing. You could enable this automated process:
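A minimal sketch of this command (assuming aRestSOAServer is the TSQLRestServer hosting the services, aRestLogServer a TSQLRest instance dedicated to logging, and that the SetServiceLog() call matches your framework revision) could be:
(aRestSOAServer.ServiceContainer as TServiceContainerServer).
  SetServiceLog(aRestLogServer); // will fill the TSQLRecordServiceLog table on aRestLogServer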
This single command will create an Audit Trail with all service calls made on aRestSOAServer to the TSQLRecordServiceLog ORM class of aRestLogServer. Keeping a dedicated REST server for the log entries will reduce the overhead on the main server, and ease its maintenance.
TSQLRecordServiceLog = class(TSQLRecord)
...
published
  /// the 'interface.method' identifier of this call
  // - this column will be indexed, for fast SQL queries, with the MicroSec
  // column (for performance tuning)
  property Method: RawUTF8 read fMethod write fMethod;
  /// the input parameters, as a JSON document
  // - will be stored in JSON_OPTIONS_FAST_EXTENDED format, i.e. with
  // shortened field names, for smaller TEXT storage
  // - content may be searched using JsonGet/JsonHas SQL functions on a
  // SQLite3 storage, or with direct document query under MongoDB/PostgreSQL
  property Input: variant read fInput write fInput;
  /// the output parameters, as a JSON document, including result: for a function
  // - will be stored in JSON_OPTIONS_FAST_EXTENDED format, i.e. with
  // shortened field names, for smaller TEXT storage
  // - content may be searched using JsonGet/JsonHas SQL functions on a
  // SQLite3 storage, or with direct document query under MongoDB/PostgreSQL
  property Output: variant read fOutput write fOutput;
  /// the Session ID, if there is any
  property Session: integer read fSession write fSession;
  /// the User ID, if there is an identified Session
  property User: integer read fUser write fUser;
  /// will be filled by the ORM when this record is written in the database
  property Time: TModTime read fTime write fTime;
  /// execution time of this method, in micro seconds
  property MicroSec: integer read fMicroSec write fMicroSec;
end;
The ORM will therefore store the following table on its database:
ServiceLog Record Layout
As you can see, all input and output parameters are part of the record, as two TDocVariant instances. Since they are stored as JSON/TEXT, you could perform some requests directly on their content, especially if the actual storage takes place in a MongoDB database: you may even use dedicated indexes on the parameter values, and/or run advanced map/reduce queries. You can use the optNoLogInput or optNoLogOutput settings with TInterfaceFactory.SetOptions() to hide all input or output parameters values, or define some value types as containing Sensitive Personal Information (SPI), using TInterfaceFactory.RegisterUnsafeSPIType.
Since very accurate timing, with a micro-second resolution, is part of the information, you will be able to make filtering or advanced statistics using simple SQL clauses. It has never been easier to monitor your SOA system, and identify potential issues. You may easily extract this information from your database, and feed a real-time visual monitoring chart system, for instance. Or identify and spy unusual execution patterns (e.g. unexpected timing or recurring error codes), which will match some SQL requests: those SQL statements may be run automatically on a regular basis, to prevent any problem before it actually happens.
16.8.5.3. Tracing Asynchronous External Calls
Sometimes, your server may be the client of another process. In an SOA environment, you may interface with a third-party REST service for an external process, e.g. sending a real-time notification.
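A hedged sketch of how such an asynchronous notification trail may be enabled (the variable names follow the next paragraph; the receiver object and the exact SendNotifications() parameters may differ in your framework revision) could be:
aNotificationClientService.SendNotifications(
  aServicesLogRest,TSQLRecordServiceNotifications,30); // retry period of 30 seconds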
This single command will create an Audit Trail with all notification calls sent to aNotificationClientService, in the TSQLRecordServiceNotifications ORM class of aServicesLogRest.
ServiceNotifications Record Layout
The additional Sent property will contain the TTimeLog time-stamp on which the notification will have taken place.
In fact, all methods executed via this notification service will now be first stored in this table, then the remote HTTP notifications will take place asynchronously in the background. Transmission will be in order (first-in-first-out), and in case of any connection problem (e.g. the remote server not returning a 200 HTTP SUCCESS status code), it won't move to the next entry, and will retry after the NotificationsRetrySeconds period, as supplied to the SendNotifications() method.
Of course, you may define your own sub-class, to customize the destination Audit Trail table:
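For instance (an illustrative sketch only - the class name is hypothetical):
type
  /// our own Audit Trail table, stored in its own ORM table
  TSQLRecordMyNotifications = class(TSQLRecordServiceNotifications);
Supplying this class to SendNotifications() would store the pending notifications in the TSQLRecordMyNotifications table instead of the default one.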
Thanks to those TSQLRecordServiceLog classes, high-level support and analysis has never been easier. The actual implementation of those features has been tuned to minimize the impact on main performance, by using e.g. delayed write operations via BATCH sequences for adding/updating/deleting records, or a dedicated background thread for the asynchronous notification process.
16.8.6. Transmission content
All data is transmitted as JSON arrays or objects, according to the requested URI.
We'll discuss how data is expected to be transmitted, at the application level.
16.8.6.1. Request format
As stated above, there are several available modes of routing, defined by a given class, inheriting from TSQLRestServerURIContext:
Routing via TSQLRestServerURIContext classes hierarchy
The corresponding description may be:
Routing class | TSQLRestRoutingREST | TSQLRestRoutingJSON_RPC
Security | RESTful authentication for each method or for the whole service (interface) | RESTful authentication for the whole service (interface)
Speed | 10% faster | 10% slower
Most of the time, the input parameters will be transmitted as a JSON array of values, following the exact order of const / var method parameters. As an alternative, a JSON object storing the input parameters by name will be accepted. This will be slightly slower than a JSON array of parameters, but could be handy, depending on the client side. Last but not least, TSQLRestRoutingREST is able to decode parameters encoded at URI level, as most regular historic HTTP requests.
The routing to be used is defined globally in the TSQLRest.ServiceRouting property, and should match on both client and server side, of course. By design, you should never assign the abstract TSQLRestServerURIContext to this property.
The TSQLRestServerURIContext abstract class defines the following methods, which will be overridden by inherited implementations to reflect the expected behavior on all aspects of the RESTful routing and transmission:
TSQLRestServerURIContext = class
protected
  ...
  /// retrieve RESTful URI routing
  function URIDecodeREST: boolean; virtual;
  /// retrieve method-based SOA URI routing with optional RESTful mode
  procedure URIDecodeSOAByMethod; virtual;
  /// retrieve interface-based SOA
  procedure URIDecodeSOAByInterface; virtual; abstract;
  /// process authentication
  function Authenticate: boolean; virtual;
  /// direct launch of a method-based service
  procedure ExecuteSOAByMethod; virtual;
  /// direct launch of an interface-based service
  procedure ExecuteSOAByInterface; virtual; abstract;
  /// handle GET/LOCK/UNLOCK/STATE verbs for ORM/CRUD process
  procedure ExecuteORMGet; virtual;
  /// handle POST/PUT/DELETE/BEGIN/END/ABORT verbs for ORM/CRUD process
  procedure ExecuteORMWrite; virtual;
  ...
16.8.6.1.1. REST mode
16.8.6.1.1.1. Parameters transmitted as JSON array
In the default TSQLRestRoutingREST mode, both service and operation (i.e. interface and method) are identified within the URI. And the message body is a standard JSON array of the supplied parameters (i.e. all const and var parameters).
Here we use a POST verb, but the framework will also allow other verbs like GET, if needed (e.g. from a regular browser). The pure Delphi client implementation will use only POST.
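For instance, a plain ICalculator.Add(1,2) call would be transmitted as:
POST /root/Calculator.Add
(...)
[1,2]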
For a sicClientDriven mode service, the needed instance ID is appended to the URI:
POST /root/ComplexNumber.Add/1234
(...)
[20,30]
Here, 1234 is the server-side instance ID, which is used to track the instance life-time in sicClientDriven mode. One benefit of transmitting the Client Session ID within the URI is that it will be more secure in our RESTful authentication scheme - see below: each method (and even any client-driven session ID) will be signed properly.
16.8.6.1.1.2. Parameters transmitted as JSON object
The mORMot server will also accept the incoming parameters to be encoded as a JSON object of named values, instead of a JSON array:
POST /root/Calculator.Add
(...)
{"n1":1,"n2":2}
Of course, the order of the values is not mandatory in a JSON object, since parameters will be looked up by name. As a result, the following request is equivalent to the previous one:
POST /root/Calculator.Add
(...)
{"n2":2,"n1":1}
For a sicClientDriven mode service, the needed instance ID is appended to the URI:
POST /root/ComplexNumber.Add/1234
(...)
{"aReal":20,"aImaginary":30}
In some cases, naming the parameters could be useful, on the client side. But this should not be the default, since it will be slightly slower (for parsing and checking the names), and use more bandwidth at transmission.
Any missing parameter in the incoming JSON object will be replaced by its default value. For instance, the following will run ICalculator.Add(0,2):
POST /root/Calculator.Add
(...)
{"n2":2}
Any unknown parameter in the incoming JSON object will just be ignored. It could be handy, if you want to transmit some generic execution context (e.g. a global "data scope" in a MVC model), and let the service use only the values it needs.
POST /root/ComplexNumber.Add/1234
(...)
{"Session":"1234","aImaginary":30,"aReal":20,"UserLogged":"Nikita"}
Of course, the extra values will consume some bandwidth for nothing, but the process cost on the server side will be negligible, since our implementation will just ignore those unexpected properties, without allocating any memory for them.
16.8.6.1.1.3. Parameters encoded at URI level
In this TSQLRestRoutingREST mode, the server is also able to retrieve the parameters from the URI, if the message body is left void. This is not used from a Delphi client (since it will be more complex and therefore slower), but it can be used for a client, if needed:
POST root/Calculator.Add?+%5B+1%2C2+%5D
GET root/Calculator.Add?+%5B+1%2C2+%5D
In the above line, +%5B+1%2C2+%5D will be decoded as [1,2] on the server side. In conjunction with the use of a GET verb, it may be more suitable for a remote AJAX connection.
As an alternative, you can encode and name the parameters at URI level, in a regular HTML fashion:
GET root/Calculator.Add?n1=1&n2=2
Since parameters are named, they can be in any order. And if any parameter is missing, it will be replaced by its default value (e.g. 0 for a number or '' for a string).
This may be pretty convenient for simple services, consumed from any kind of client.
Note that there is a known size limitation when passing some data with the URI over HTTP. The official RFC 2616 standard advises to limit the URI size to 255 characters, whereas in practice, it sounds safe to transmit up to 2048 characters within the URI. If you want to get rid of this limitation, just use the default transmission of a JSON array as request body.
As an alternative, the URI can be written as /RootName/InterfaceName/MethodName. It may be more RESTful-compliant, depending on your client policies. The following URIs will therefore be equivalent to the previous requests:
POST /root/Calculator/Add
POST /root/ComplexNumber/Add/1234
POST root/Calculator/Add?+%5B+1%2C2+%5D
GET root/Calculator/Add?+%5B+1%2C2+%5D
GET root/Calculator/Add?n1=1&n2=2
From a Delphi client, the /RootName/InterfaceName.MethodName scheme will always be used.
16.8.6.1.1.4. Sending a JSON object
By default, the mORMot client will send all values, transmitted as a JSON array without any parameter name, as we have seen:
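If a given service expects a JSON object instead, a hedged sketch (assuming Client is a connected TSQLRestClientURI, and that the ParamsAsJSONObject property is available on the client-side factory in your framework revision) could be:
// send named parameters as a JSON object for this particular interface
(Client.Services['Calculator'] as TServiceFactoryClient).ParamsAsJSONObject := true;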
This may help transmitting some values to a non-mORMot server, in another format, for a given service.
16.8.6.1.1.5. Sending raw binary
If your purpose is to upload some binary data, RawByteString and TSQLRawBlob input parameters will by default be transmitted as Base64 encoded JSON text.
You may define Client-Server services via methods to transmit raw binary, without the Base64 encoding overhead. It would allow low-level access to the input content type and encoding, even with multi-part file upload over HTTP.
As an alternative, if you use the default TSQLRestRoutingREST routing and define a single RawByteString or TSQLRawBlob input parameter, it will be processed as a raw POST with a binary body and the 'application/octet-stream' mime-type. This may be more optimized for remote access over the Internet.
16.8.6.1.2. JSON-RPC
16.8.6.1.2.1. Parameters transmitted as JSON array
If TSQLRestRoutingJSON_RPC mode is used, the URI will define the interface, and then the method name will be inlined with parameters, e.g.
POST /root/Calculator
(...)
{"method":"Add","params":[1,2],"id":0}
Here, the "id" field can be not set (and even not existing), since it has no purpose in sicShared mode.
For a sicClientDriven mode service:
POST /root/ComplexNumber
(...)
{"method":"Add","params":[20,30],"id":1234}
16.8.6.1.2.2. Parameters transmitted as JSON object
As an alternative, you may let the values be transmitted as a JSON object containing the named parameters values, instead of a JSON array:
POST /root/Calculator
(...)
{"method":"Add","params":{"n1":1,"n2":2},"id":0}
Any missing parameter will be replaced by its default value;
Properties order is not sensitive anymore;
Unexpected parameters will just be ignored.
Note that by definition, TSQLRestRoutingJSON_RPC mode is not able to handle URI-encoded parameters. In fact, the JSON-RPC mode expects the URI to be used only for identifying the service, and have the whole execution context transmitted as body.
16.8.6.1.3. REST mode or JSON-RPC mode?
For a standard mORMot Delphi client, or any supported Cross-Platform client - see below - TSQLRestRoutingREST is preferred. The supplied libraries, even for Smart Mobile Studio, fully implement this routing scheme. It is the fastest, safest and most modular mode available. In practice, TSQLRestRoutingJSON_RPC mode has been found to be a little bit slower. Since the method name will be part of the URI, the signature will have a bigger extent than in JSON-RPC mode, so it will be more secure. Its ability to retrieve URI-encoded parameters could also be useful, e.g. to serve some dynamic HTML pages in addition to the SOA endpoints, with proper HTTP caching abilities.
Of course, TSQLRestRoutingJSON_RPC mode may be used as an alternative, depending on the client expectations, and technology limitations, e.g. if your client expect a JSON-RPC compatible communication. It's up to you to select the right routing scheme to be used, depending on your needs.
16.8.6.2. Response format
16.8.6.2.1. Standard answer as JSON object
16.8.6.2.1.1. JSON answers
16.8.6.2.1.1.1. Returning as JSON array
The framework will always return the data in the same format, whatever the routing mode used.
Basically, this is a JSON object, with one nested "result": property, and, when the instance life time requires it, a client-driven "id": value - see below:
POST /root/Calculator.Add
(...)
[1,2]
will be answered as such:
{"result":[3]}
For a sicClientDriven, sicPerSession, sicPerUser, sicPerGroup or sicPerThread mode service, the answer will contain an additional "id":... member, which will identify the corresponding session:
{"result":[3],"id":1234}
In sicSingle and sicShared modes, the "id":0 member is just not emitted.
The result JSON array contains all var and out parameters values (in their declaration order), and then the method main result.
If you want to transmit some binary blob content, consider using a RawByteString kind of parameter, which will transmit a Base64-encoded JSON text on the wire.
The framework is able to handle class instances as parameters, for instance with the following interface, using a TPersistent child class with published properties (it will be the same for TSQLRecord ORM instances):
type
  TComplexNumber = class(TPersistent)
  private
    fReal: Double;
    fImaginary: Double;
  public
    constructor Create(aReal, aImaginary: double); reintroduce;
  published
    property Real: Double read fReal write fReal;
    property Imaginary: Double read fImaginary write fImaginary;
  end;

  IComplexCalculator = interface(ICalculator)
    ['{8D0F3839-056B-4488-A616-986CF8D4DEB7}']
    /// purpose of this unique method is to substract two complex numbers
    // - using class instances as parameters
    procedure Substract(n1,n2: TComplexNumber; out Result: TComplexNumber);
  end;
As stated above, it is not possible to return a class as a result of a function (who would be responsible for handling its life-time?). So in this method declaration, the result is declared as an out parameter.
During the transmission, published properties of TComplexNumber parameters will be serialized as standard JSON objects within the "result":[...] JSON array:
POST root/ComplexCalculator.Substract
(...)
[{"Real":2,"Imaginary":3},{"Real":20,"Imaginary":30}]
will be answered as such:
{"result":[{"Real":-18,"Imaginary":-27}]}
16.8.6.2.1.1.2. Returning a JSON object
Note that if TServiceFactoryServer.ResultAsJSONObject is set to true, the outgoing values won't be emitted within a "result":[...] JSON array, but via a "result":{... } JSON object, with the var/out parameter names as object fields, and "Result": for a function result:
GET root/Calculator/Add?n1=1&n2=2
...
{"Result":3}
All those JSON array or object contents fulfill perfectly standard JSON declarations, so they can be generated and consumed directly by any AJAX client. The TServiceFactoryServer.ResultAsJSONObject option makes it even easier to consume mORMot services, since all outgoing values will be named in the "result": JSON object.
16.8.6.2.1.2. Returning raw JSON content
By default, if you want to transmit a JSON content with interface-based services, using a RawUTF8 will convert it to a JSON string. Therefore, any JSON special characters (like " or \ or [) will be escaped. This will slow down the process on both server and client side, and increase transmission bandwidth.
For instance, if you define such a method:
function TServiceRemoteSQL.Execute(const aSQL: RawUTF8; aExpectResults, aExpanded: Boolean): RawUTF8;
var res: ISQLDBRows;
begin
  if fProps=nil then
    raise Exception.Create('Connect call required before Execute');
  res := fProps.ExecuteInlined(aSQL,aExpectResults);
  if res=nil then
    result := '' else
    result := res.FetchAllAsJSON(aExpanded);
end;
The FetchAllAsJSON() method will return a JSON array content, but will be escaped as a JSON string when transmitted via a RawUTF8 variable.
A dedicated RawJSON type has been defined, and will specify to the mORMot core that the UTF-8 text is a valid JSON content, and should not be escaped.
That is, defining the method as follows will increase process speed and reduce used bandwidth:
function TServiceRemoteSQL.Execute(const aSQL: RawUTF8; aExpectResults, aExpanded: Boolean): RawJSON;
See sample "16 - Execute SQL via services" for some working code using this feature.
As a consequence, using RawJSON will also make the transmitted content much more AJAX friendly, since the returned value will be a valid JSON array or object, and not a JSON string which will need JavaScript "unstringification".
16.8.6.2.1.3. Returning errors
In case of an error, the standard message object will be returned, matching one of the following error texts:
Error message | Description
(no specific message) | TSQLRestRoutingJSON_RPC call with invalid method name (in this mode, there is no specific message, since a JSON answer may be a valid request)
Parameters required | The server expects at least a void JSON array (aka []) as parameters
Unauthorized method | This method is not allowed with the current authenticated user group - see Security above
Not allowed to publish signature | The client requested the interface signature, but this has not been allowed on the server policy (see TServiceContainerServer.PublishSignature)
... instance id:? not found or deprecated | The supplied "id": parameter points to a wrong instance (in sicPerSession / sicPerUser / sicPerGroup mode)
ExceptionClass: Exception Message (with 500 Internal Server Error) | An exception was raised during method execution
On the client side, you may encounter the following EInterfaceFactoryException messages, starting with the generic 'Invalid fake IInterfaceName.MethodName interface call' text:
ErrorText | Description
unexpected self | self does exist as low-level implementation detail, but is not intended to be transmitted
JSON array/object result expected | content returned from the Server was neither a JSON array nor a JSON object
unexpected parameter "...." | the Server returned a JSON object with an unknown or invalid member name
returned object / record / variant / array / RawJSON | a returned class, record, variant, dynamic array or RawJSON value was not properly serialized
missing or invalid value | a returned string or numerical value is not valid JSON content
16.8.6.2.2. Returning content as XML
By default, interface-based services of a mORMot server will always return a JSON array (or a JSON object, if TServiceFactoryServer.ResultAsJSONObject or ResultAsJSONObjectWithoutResult is true).
With some kind of clients (e.g. if they are made by a third party), it could be useful to return XML content instead.
Your mORMot server is able to let its interface-based services return XML content instead of, or in addition to, the default JSON format.
16.8.6.2.2.1. Always return XML content
GET root/Calculator/Add?n1=1&n2=2
...
<?xml version="1.0" encoding="UTF-8"?>
<result><Result>3</Result></result>
Conversion is processed from the JSON content generated by the mORMot kernel, via a call to the JSONBufferToXML() function, which performs the XML generation with almost no memory allocation. So only a slight performance penalty may be noticed (in practice, it is much faster than most node-based XML producers available).
One drawback of using this TServiceFactoryServer.ResultAsXMLObject property is that your regular Delphi or AJAX clients won't be able to consume the service any more, since they expect JSON content. If you want your service to be consumed as both XML and JSON, you will need two services. You can therefore define a dedicated interface to return XML, and then register this interface to return only XML:
type
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    /// add two signed 32-bit integers
    function Add(n1,n2: integer): integer;
  end;

  ICalculatorXML = interface(ICalculator)
    ['{0D682D65-CE0F-441B-B4EC-2AC75E357EFE}']
  end; // no additional method, just a new name and GUID

  TServiceCalculator = class(TInterfacedObject, ICalculator, ICalculatorXML)
  public
    // implementation class should implement both interfaces
    function Add(n1,n2: integer): integer;
  end;
...
aServer.ServiceRegister(TServiceCalculator,[TypeInfo(ICalculator)],sicShared);
aServer.ServiceRegister(TServiceCalculator,[TypeInfo(ICalculatorXML)],sicShared).ResultAsXMLObject := True;
...
There will therefore be two running service instances (e.g. here two instances of TServiceCalculator, one for ICalculator and one for ICalculatorXML). It could be an issue, in some cases.
And such a dedicated interface may need more testing and code on the server side, since the implementation will be accessible from two URIs:
GET root/Calculator/Add?n1=1&n2=2
...
{"result":{"Result":3}}
and for ICalculatorXML interface:
GET root/CalculatorXML/Add?n1=1&n2=2
...
<?xml version="1.0" encoding="UTF-8"?>
<result><Result>3</Result></result>
16.8.6.2.2.2. Return XML content on demand
As an alternative, you can let the mORMot server inspect the incoming HTTP headers, and return the content as XML if the "Accept: " header is exactly "application/xml" or "text/xml".
For standard requests, the incoming HTTP header will be either void or "Accept: */*", so JSON content will be returned. But if the client sets either "Accept: application/xml" or "Accept: text/xml" in its header, then an XML document will be returned.
Instead of this JSON content:
GET root/Calculator/Add?n1=1&n2=2
Accept: */*
...
{"result":{"Result":3}}
The following XML will be returned:
GET root/Calculator/Add?n1=1&n2=2
Accept: application/xml
...
<?xml version="1.0" encoding="UTF-8"?>
<result><Result>3</Result></result>
as it will with "text/xml":
GET root/Calculator/Add?n1=1&n2=2
Accept: text/xml
...
<?xml version="1.0" encoding="UTF-8"?>
<result><Result>3</Result></result>
Note that the header is expected to be "Accept: application/xml" or "Accept: text/xml" as exact value. For instance "Accept: text/html,application/xml,*/*" won't be detected by the server, and will return regular JSON:
GET root/Calculator/Add?n1=1&n2=2
Accept: text/html,application/xml,*/*
...
{"result":{"Result":3}}
Your XML client should therefore be able to force the exact content of the HTTP "Accept:" header.
Together with parameter values optionally encoded at URI level - available with the default TSQLRestRoutingREST routing scheme (see ?n1=1&n2=2 above) - it could be a useful alternative to consume mORMot services from any XML-based client.
16.8.6.2.3. Custom returned content
Note that even if the response format is a JSON object by default, and expected as such by our TServiceContainerClient implementation, there is a way of returning any content from a remote request.
It may be used by AJAX or HTML applications to return any kind of data, i.e. not only JSON results, but pure text, HTML or even binary content. Our TServiceFactoryClient instance is also able to handle such requests, and will save client-server bandwidth when transmitting some BLOB data (since it won't serialize the content with Base64 encoding).
In order to specify a custom format, you can use the following TServiceCustomAnswer record type as the result of an interface function:
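Its layout is expected to be close to the following sketch (the field names match the description below):
TServiceCustomAnswer = record
  /// mandatory response type, e.g. TEXT_CONTENT_TYPE_HEADER or HTML_CONTENT_TYPE_HEADER
  Header: RawUTF8;
  /// the response body, transmitted back directly with no JSON serialization
  Content: RawByteString;
  /// optional HTTP status code (HTTP_SUCCESS if left untouched)
  Status: cardinal;
end;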
The Header field shall not be null (i.e. not equal to ''), and contains the expected content type header (e.g. TEXT_CONTENT_TYPE_HEADER or HTML_CONTENT_TYPE_HEADER). Then the Content value will be transmitted back directly to the client, with no JSON serialization. Of course, no var nor out parameter will be transmitted (since there is no JSON result array any more). Finally, the Status field could be overridden with a proper HTTP status code, if the default HTTP_SUCCESS is not enough for your purpose. Note that when consumed from Delphi clients, HTTP_SUCCESS is expected to be returned by the server: you should customize the Status field only for plain AJAX / web clients.
In order to implement such method, you may define such an interface:
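For instance, a hypothetical file-retrieval service could be declared as follows (the interface name, GUID and method are illustrative only, not part of the framework):
IFileDownload = interface(IInvokable)
  ['{7B54A383-9D55-4E43-8F4B-2C1A8E5D6F01}'] // illustrative GUID
  /// returns the raw content of the requested file, with its MIME type in Header
  function Retrieve(const aFileName: RawUTF8): TServiceCustomAnswer;
end;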
Note that since there is only one BLOB content returned, no var nor out parameters are allowed to be defined for this method. If this is the case, an exception will be raised during the interface registration step. But you can define any const parameter needed, to specify your request.
You may also be able to use this feature to implement custom UTF-8 HTML creation, setting the Header value to the HTML_CONTENT_TYPE_HEADER constant, and using our fast Mustache template engine - see below - for the rendering. Remember that in TSQLRestRoutingREST mode, you can encode any simple parameter value at URI level, to transmit your browsing context.
16.9. Comparison with WCF
Windows Communication Foundation (WCF) is the unified programming model provided by Microsoft for building service-oriented applications. See http://msdn.microsoft.com/en-us/library/dd456779
Here is a short reference table of WCF / mORMot SOA features and implementation of the RESTful pattern.
Feature | WCF | mORMot
Internal design | SOAP with REST integration | RESTful
Hosting | exe/service/IIS/WAS | in-process/exe/service
Scalability/balancing | up to WAS | by dedicated hosting
MetaData | WSDL+XML | JSON contract
Service contract | interface | interface
Data contract | class | class/record
ORM integration | separated | integrated in the model
URI definition | attribute-driven | REST/JSON-RPC convention-driven, or class-driven
Service contract | interface + attributes | interface + shared Model
Versioning | XML name-space | interface signature
Message protocol | SOAP/custom | RESTful
Messaging | single/duplex | stateless (like HTTP)
Sequence | attributes on methods | interface life time
Transactional | fully transactional | on implementation side
Instance life time | per call, per session, single | per call, per session, per user, per group, per thread, single, client-driven
Configuration | .config file or code | convention over configuration, optionally tuned by code
Client access | Layer source should be generated | No layer, but direct registration
End points | One end-point per contract | Unique or shared end-point
Operation | synchronous/asynchronous | synchronous (REST)
Session | available (optional) | available (optional)
Encryption | at Service level | at communication level
Compression | at Service level | at communication level
Serialization | XML/binary/JSON | JSON/XML/custom
Communication protocol | HTTP/HTTPS/TCP/pipe/MSMQ | HTTP/HTTPS/TCP/pipe/messages/in-process
HTTP/HTTPS server | http.sys | http.sys/native (winsock)
Authentication | Windows or custom | Windows, ORM-based, or class-driven
Authorization | by attribute or config files | per user group, or class-driven
Threading | by attributes | at service/method level
Weight | middle (GC, JIT, .dll) | low
Speed | good | high
Extensibility | verbose but complete | customizable
Standard | de facto | KISS design (e.g. JSON, HTTP)
Source code | closed | published
License | proprietary | Open
Price | depends | Free
Support | official + community | Synopse + community
Runtime required | .Net framework (+IIS/WAS) | none (blank OS)
About instance life time, note that in WCF InstanceContextMode.Single is in fact the same as sicShared within mORMot context: only one instance is used for all incoming calls and is not recycled subsequent to the calls. Therefore, sicSingle mode (which is mORMot's default) maps InstanceContextMode.PerCall in WCF, meaning that one instance is used per call.
We may be tempted to say that mORMot SOA architecture is almost complete, even for a young and Open Source project. Some features (like per user, per group or client-driven instance life time, or Windows Messages local communication) are even unique to mORMot. In fact, sicClientDriven is pretty convenient when implementing a Service Oriented Architecture.
Of course, WCF features its SOAP-based architecture. But WCF also suffers from it: due to this ground-up message design, it will always endure its SOAP overweight, which is "Simple" only by name, not by reputation.
If you need to communicate with an external service provider, you can easily create a SOAP gateway from Delphi, as such:
Import the WSDL (Web Service Definition Language) definition of a web service and turn it into a Delphi import unit;
Publish the interface as a mORMot server-side implementation class.
Since SOAP features a lot of requirements, and expects some plumbing according to its format (especially when services are provided from C# or Java), we chose not to re-invent the wheel this time, and rely on existing Delphi libraries (available within the Delphi IDE) for this purpose. If you need a cross-platform SOAP 1.1 compatible solution, or if your version of Delphi does not include SOAP process, you may take a look at http://wiki.freepascal.org/Web_Service_Toolkit which is a web services package for FPC, Lazarus and Delphi.
But for service communication within the mORMot application domain, the RESTful / JSON approach gives much better performance and ease of use. You do not have to play with WSDL or unit wrappers, just share some interface definition between clients and servers. Once you have used the ServiceRegister() or ServiceDefine() methods of mORMot, you will find out how the WCF plumbing is over-sized and over-complicated: imagine that WCF allows only one end-point per interface/contract - in a SOLID design principles world, where interface segregation should reign, it is not the easiest way to go!
Optionally, mORMot's interface based services allow to publish their result as XML, and encode the incoming parameters at URI level. It makes it a good alternative to SOAP, in the XML world.
At this time, the only missing feature of mORMot's SOA is transactional process, which must be handled on server side, within the service implementation (e.g. with explicit commit or rollback).
17. Cross-Platform clients
Adopt a mORMot
The current version of the main framework units targets only Win32 / Win64 systems under Delphi, and (in a preliminary state) Windows or Linux under FPC. It allows easy self-hosting of mORMot servers for local business applications in any corporation, or cheap hosting in the Cloud, since mORMot CPU and RAM expectations are much lower than a regular IIS-WCF-MSSQL-.Net stack. But in a Service-Oriented Architecture (SOA), you will probably need to create clients for platforms outside this supported platform set, especially mobile devices or AJAX applications.
A set of cross-platform client units is therefore available in the CrossPlatform sub-folder of the source code repository. It allows writing any client in modern object pascal language, for:
Any version of Delphi, on any platform (Mac OSX, or any mobile supported devices);
FreePascal Compiler (in 2.6.4, 2.7.1 or 3.x branches - preferred is 3.2 fixes);
Smart Mobile Studio (2.1 and up), to create AJAX or mobile applications (via PhoneGap, if needed).
Complex record types are also exported and consumed via JSON, on all platforms (for both ORM and SOA methods);
Integrated debugging methods, used by both ORM and SOA process, able to log into a local file or to a remote server - see below;
Some cross-platform low-level functions and types definitions, to help share as much code as possible between your projects.
In the future, C# or Java clients may be written. The CrossPlatform sub-folder code could be used as reference, to write minimal and efficient clients on any platform. Our REST model is pretty straightforward and standard, and the use of JSON avoids a lot of potential marshalling issues which may occur with XML or binary formats.
In practice, a code generator embedded in the mORMot server can be used to create the client wrappers, using the template-based generation included on the server side - see below. With a click, you can generate and download a client source file for any supported platform. A set of .mustache templates is available, and can be customized or extended to support any new platform: any help is welcome, especially for targeting Java or C# clients.
17.1. Available client platforms
17.1.1. Delphi FMX / FreePascal FCL cross-platform support
Latest versions of Delphi include the FireMonkey (FMX) framework, able to deliver multi-device, true native applications for Windows, Mac OSX, Android and iOS (iPhone/iPad). Our SynCrossPlatform* units are able to easily create clients for those platforms.
Similarly, these units can be compiled with FreePascal, so that any mORMot server could be consumed from the numerous supported platforms of this compiler.
In order to use those units, ensure in your IDE that the CrossPlatform sub-folder of the mORMot source code repository is defined in your Library Search Path.
17.1.1.1. Cross-platform JSON
We developed our own cross-platform JSON process unit in SynCrossPlatformJSON.pas, shared with Delphi and FreePascal. In fact, it appears to be easier to use (since it is variant-based and with late-binding abilities) and run much faster than the official DBXJSON.pas unit shipped with latest versions of Delphi, as stated by the "25 - JSON performance" sample:
Our TSQLTableJSON class is more than 10 times faster than standard DBXJSON unit, when processing a list of results as returned by a mORMot server. The latest value on each line above is the memory consumption. It should be of high interest on mobile platforms, where memory allocation tends to be much slower and sensitive than on Windows (where the FastMM4 memory manager does wonders). Our unit consumes 5 times less memory than the RTL's version.
The "Synopse ORM" lines stand for the TSQLTableJSON class as implemented in mORMot.pas. It uses our optimized UTF-8 functions and classes, in-place escaping together with our RawUTF8 custom string type as implemented in SynCommons.pas, so that it is 3 times faster than our cross-platform units, and 40 times than DBXJSON, using much less memory. Some tricks used by Synopse ORM rely on pointers and are not compatible with the NextGen compiler or the official Delphi road-map, so the Synopse crossplatform uses diverse algorithm, but offers still pretty good performance.
This unit features a TJSONVariantData custom variant type, similar to TDocVariant custom variant type, available in the main mORMot framework. It allows writing such nice and readable code, with late-binding:
var doc: variant;
json,json2: string;
...
doc := JSONVariant('{"test":1234,"name":"Joh\\"n\\r","zero":0.0}');
assert(doc.test=1234);
assert(doc.name='Joh"n'#13);
assert(doc.name2=null);
assert(doc.zero=0);
json := doc; // conversion to JSON text when assigned to a string variable
assert(json='{"test":1234,"name":"Joh\\"n\\r","zero":0}');
doc.name2 := 3.1415926;
doc.name := 'John';
json := doc;
assert(json='{"test":1234,"name":"John","zero":0,"name2":3.1415926}');
The unit is also able to serialize any TPersistent class, i.e. all published properties could be written or read from a JSON object representation. It also handles nested objects, stored as TCollection. See for instance in the SynCrossPlatformTests unit:
Of course, this serialization feature is used for the TSQLRecord ORM class.
Due to lack of RTTI, record serialization is supported via some functions generated by the server with the code wrappers.
17.1.1.2. Delphi OSX and NextGen
In order to be compliant with the NextGen revision, our SynCrossPlatform* units follow the expectations of this new family of cross-compilers, which targets Android and iOS. In particular, we rely only on the string type for text process and storage, even at JSON level, and we tried to make object allocation ARC-compatible. Some types have been defined, e.g. THttpBody, TUTF8Buffer or AnsiChar, to ensure that our units will compile on all supported platforms.
Feedback is needed for the mobile targets, via FMX. In fact, we rely for our own projects on Smart Mobile Studio for our mobile applications, so the Synopse team did not test Delphi NextGen platforms (i.e. iOS and Android) as deep as other systems. Your input will be very valuable and welcome, here!
17.1.1.3. FreePascal clients
SynCrossPlatform* units support the FreePascal Compiler, in its 2.7.1 / 3.x branches. Most of the code is shared with Delphi, including RTTI support and all supported types.
Some restrictions apply, though.
Due to a bug in FreePascal implementation of variant late binding, the following code won't work as expected on older revisions of FPC:
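The problematic pattern is plain late-binding access on a variant holding a TJSONVariantData instance, e.g. (a minimal sketch, reusing the JSONVariant() function shown above):
var doc: variant;
begin
  doc := JSONVariant('{"name":"John","year":1982}');
  if doc.name='John' then     // late-binding getter could fail on older FPC revisions
    doc.year := doc.year+1;   // late-binding setter was also affected
end;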
In fact, the way late-binding properties are implemented in the FreePascal RTL is not fully compatible with Delphi expectations. The FreePascal maintainers did some initial fix (the variant instance is now passed by reference), so the above code seems to work on current FPC trunk.
As a result, direct access to TJSONVariantData instances, and not a variant variable, may be both safer and faster when using FPC.
In the Lazarus IDE, we also observed that the debugger is not able to handle our custom variant type. If you look at any TJSONVariantData instance with the debugger, an error message "unsupported variant type" will appear. As far as we found out, this is a Lazarus limitation. Delphi, on its side, is able to display any custom variant type in its debugger, after conversion to string, i.e. its JSON representation.
Another issue with the 2.7.1 / 3.1.1 revisions is how the new string type is implemented. In fact, if you use a string variable containing an UTF-8 encoded text, then the following line will reset the result code page to the system code page:
function StringToJSON(const Text: string): string;
...
result := '"'+copy(Text,1,j-1); // here FPC 2.7.1 erases UTF-8 encoding
...
It sounds as if the '"' constant forces the code page of result away from UTF-8 content. With Delphi, this kind of statement works as expected, even for AnsiString values, since the '"' constant is handled as RawByteString. We were not able to find an easy and safe workaround for FPC yet. Input is welcome in this area, from any expert!
You have to take care of this limitation, if you target the Windows operating system with FPC (and Lazarus). Under other systems, the default code page is likely to be UTF-8, so in this case our SynCrossPlatform* units will work as expected.
We found the FreePascal compiler to work very well, and to produce small and fast executables. For most common work, timing is comparable with Delphi. The memory manager is less optimized than FastMM4 for simple single-threaded tests, but is cross-platform and designed to be more efficient in multi-thread mode: in fact, it has no giant lock, from which FastMM4 suffers.
17.1.1.4. Local or remote logging
You can use the TSQLRest.Log() overloaded methods to log any content into a file or a remote server.
All ORM and SOA functions of the TSQLRest instance will create the expected log, just with the main mORMot units running on Win32/Win64 - see below. For instance, here are some log entries created during the RegressionTest.dpr process:
16:47:15 Trace POST root/People status=201 state=847 in=92 out=0
16:47:15 DB People.ID=200 created from {"FirstName":"First200","LastName":"Last200","YearOfBirth":2000,"YearOfDeath":2025,"Sexe":0}
16:47:15 SQL select RowID,FirstName,LastName,YearOfBirth,YearOfDeath,Sexe from People
16:47:15 Trace GET root?sql=select+RowID%2CFirstName%2CLastName%2CYearOfBirth%2CYearOfDeath%2CSexe+from+People status=200 state=847 in=0 out=21078
16:47:15 SQL select RowID,YearOfBirth,YearOfDeath from People
16:47:15 Trace GET root?sql=select+RowID%2CYearOfBirth%2CYearOfDeath+from+People status=200 state=847 in=0 out=10694
16:47:15 SQL select RowID,FirstName,LastName,YearOfBirth,YearOfDeath,Sexe from People where yearofbirth=:(1900):
16:47:15 Trace GET root?sql=select+RowID%2CFirstName%2CLastName%2CYearOfBirth%2CYearOfDeath%2CSexe+from+People+where+yearofbirth%3D%3A%281900%29%3A status=200 state=847 in=0 out=107
16:47:15 Trace DELETE root/People/16 status=200 state=848 in=0 out=0
16:47:15 DB Delete People.ID=16
Then, our Log View tool is able to run as a remote log server, and display the incoming events in real-time - see below. Having such logs available will be pretty convenient, especially when debugging applications on a mobile device, or a remote computer.
17.1.2. Smart Mobile Studio support
Smart Mobile Studio - see http://www.smartmobilestudio.com - is a complete RAD environment for writing cutting edge HTML5 mobile applications. It ships with a fully fledged compiler capable of compiling Object Pascal (in a modern dialect called SmartPascal) into highly optimized and raw JavaScript.
There are several solutions able to compile to JavaScript. In fact, we can find several families of compilers:
JavaScript super-sets, adding optional strong typing, and classes, close to the ECMAScript Sixth Edition: the current main language in this category is certainly TypeScript, designed by Anders Hejlsberg (father of both the Delphi language and C#), and published by Microsoft;
New languages, dedicated to make writing JavaScript programs easier, with an alternative syntax and new concepts (like classes, lambdas, scoping, splats, comprehensions...): most relevant languages of this family are CoffeeScript and Dart;
High-level languages, like Google Web Toolkit (compiling Java code), JSIL (from C# via Mono), or Smart Mobile Studio (from object pascal);
Low-level languages, like Emscripten (compiling C/C++ from LLVM byte-code, using asm.js).
Of course, from our point of view, use of modern object pascal is of great interest, since it will leverage our own coding skills, and make us able to share code between client and server sides.
17.1.2.1. Beyond JavaScript
The Smart Pascal language brings strong typing, true OOP to JavaScript, including classes, partial classes, interfaces, inheritance, polymorphism, virtual and abstract classes and methods, helpers, closures, lambdas, enumerations and sets, getter/setter expressions, operator overloading, contract programming. But you can still unleash the power of JavaScript (some may say "the good parts"), if needed: the variant type is used to allow dynamic typing, and you can write some JavaScript code as an asm .. end block. See http://en.wikipedia.org/wiki/The_Smart_Pascal_programming_language
The resulting HTML5 project is self-sufficient with no external JavaScript library, and is compiled as a single index.html file (including its css, if needed). The JavaScript code generated by the compiler (written in Delphi by Eric Grange), is of very high quality, optimized for best execution performance (either in JIT or V8), has low memory consumption, and can be compressed and/or obfuscated.
The SmartCL runtime library encapsulates HTML5 APIs in a set of pure pascal classes and functions, and an IDE with an integrated form designer is available. You can debug your application directly within the IDE (since revision 2.1 - even if it is not yet always stable) or within your browser (IE, Chrome or FireBug have great debuggers), with step-by-step execution of the object pascal code (if you define "Add source map (for debugging)" in Project Options / Linker).
Using a third-party tool like PhoneGap - see http://phonegap.com - you will be able to supply your customers with true native iOS or Android applications, running without any network, and accessing the full power of any modern Smart Phone. Resulting applications will be much smaller in size than those generated by Delphi FMX (a simple Smart RESTful client with a login form and ORM + SOA tests is zipped as 40 KB), and will work seamlessly on all HTML5 platforms, including most mobile (like Windows Phone, Blackberry, Firefox OS, or webOS) or desktop (Windows, Linux, BSD, MacOS) architectures.
Smart Mobile Studio is therefore a great platform for implementing rich client-side AJAX or Mobile applications, to work with our client-server mORMot framework.
17.1.2.2. Using Smart Mobile Studio with mORMot
There is no package to be installed within the Smart Mobile Studio IDE. The client units will be generated directly from the mORMot server. Any edition of Smart - see http://smartmobilestudio.com/feature-matrix - is enough: you do not need to pay for the Enterprise edition to consume mORMot services. But of course, the Professional edition is recommended, since the Basic edition does not allow creating forms from the IDE, which is the main point for an AJAX application.
In contrast to the wrappers available in the Professional edition of Smart, for accessing RemObjects or DataSnap servers, our mORMot clients are 100% written in the SmartPascal dialect. There is no need to link an external .js library to your executable, and you will benefit from the obfuscation and smart linking features of the Smart compiler.
The only requirement is to copy the mORMot cross-platform units to your Smart Mobile Studio installation. This can be done in three copy instructions:
xcopy SynCrossPlatformSpecific.pas "c:\ProgramData\Optimale Systemer AS\Smart Mobile Studio\Libraries" /Y
xcopy SynCrossPlatformCrypto.pas "c:\ProgramData\Optimale Systemer AS\Smart Mobile Studio\Libraries" /Y
xcopy SynCrossPlatformREST.pas "c:\ProgramData\Optimale Systemer AS\Smart Mobile Studio\Libraries" /Y
You can find a corresponding BATCH file in the CrossPlatform folder, and in SQLite3\Samples\29 - SmartMobileStudio Client\CopySynCrossPlatformUnits.bat.
In fact, the SynCrossPlatformJSON.pas unit is not used under Smart Mobile Studio: we use the built-in JSON serialization features of JavaScript, using variant dynamic type, and the standard JSON.Stringify() and JSON.Parse() functions.
17.1.3. Remote logging
Since there is no true file system API available in an HTML5 sandboxed application, logging to a local file is not an option. Even when packaged with PhoneGap, local log files are not convenient to work with.
Generated logs will have the same methods and format as with Delphi or FreePascal - see Local or remote logging. TSQLRest.Log(E: Exception) method will also log the stack trace of the exception! Our LogView tool - see below - is able to run as a simple but efficient remote log server and viewer, shared with regular or cross-platform units of the framework.
A dedicated asynchronous implementation has been refined for Smart Mobile Studio clients, so that several events will be gathered and sent at once to the remote server, to maximize bandwidth use and keep the application responsive. It allows even complex mobile applications to be debugged with ease, on any device, even over WiFi or 3G/4G networks. Your support team could ask your customer to enable logging for a particular case, then see in real time what is wrong with your application.
17.2. Generating client wrappers
Even if it is feasible to write the client code by hand, your mORMot server is able to create the source code needed for client access, via a dedicated method-based service, and a set of Mustache-based templates - see below.
The following templates are available in the CrossPlatform\templates folder:
Unit Name                         Compiler Target
CrossPlatform.pas.mustache        Delphi / FPC SynCrossPlatform* units
Delphi.pas.mustache               Delphi Win32/Win64 mORMot units
SmartMobileStudio.pas.mustache    Smart Mobile Studio 2.1
In the future, other wrappers may be added. And you can write your own, which could be included within the framework source! Your input is warmly welcome, especially if you want to write a template for a Java or C# client. The generated data context already contains the data types corresponding to those compilers: e.g. a mORMot RawUTF8 field or parameter could be identified as "typeCS":"string" or "typeJava":"String" in addition to "typeDelphi":"RawUTF8" and "typePascal":"string".
17.2.1. Publishing the code generator
By default, and for security reasons, the code generation is not embedded in your mORMot RESTful server. In fact, the mORMotWrapper.pas unit will link both mORMot.pas and SynMustache.pas units, and use Mustache templates to generate code for a given TSQLRestServer instance.
We will start from the interface-based service Sample code as defined in the "SQLite3\Samples\14 - Interface based services" folder. After some minor modifications, we copied the server source code into "SQLite3\Samples\27 - CrossPlatform Clients\Project14ServerHttpWrapper.dpr":
program Project14ServerHttpWrapper;

{$APPTYPE CONSOLE}

uses
  SysUtils,
  Classes,
  SynCommons,
  mORMot,
  mORMotHttpServer,
  mORMotWrappers,
  Project14Interface in '..\14 - Interface based services\Project14Interface.pas';

type
  TServiceCalculator = class(TInterfacedObject, ICalculator)
  public
    function Add(n1,n2: integer): integer;
  end;

function TServiceCalculator.Add(n1, n2: integer): integer;
begin
  result := n1+n2;
end;

var
  aModel: TSQLModel;
  aServer: TSQLRestServer;
  aHTTPServer: TSQLHttpServer;

begin
  // create a Data Model
  aModel := TSQLModel.Create([],ROOT_NAME);
  try
    // initialize a TObjectList-based database engine
    aServer := TSQLRestServerFullMemory.Create(aModel,'test.json',false,true);
    try
      // add the http://localhost:888/root/wrapper code generation web page
      AddToServerWrapperMethod(aServer,['..\..\..\CrossPlatform\templates',
        '..\..\..\..\CrossPlatform\templates']);
      // register our ICalculator service on the server side
      aServer.ServiceDefine(TServiceCalculator,[ICalculator],sicShared);
      // launch the HTTP server
      aHTTPServer := TSQLHttpServer.Create(PORT_NAME,[aServer],'+',useHttpApiRegisteringURI);
      try
        aHTTPServer.AccessControlAllowOrigin := '*'; // for AJAX requests to work
        writeln(#10'Background server is running.');
        writeln('You can test http://localhost:',PORT_NAME,'/wrapper');
        writeln(#10'Press [Enter] to close the server.'#10);
        readln;
      finally
        aHTTPServer.Free;
      end;
    finally
      aServer.Free;
    end;
  finally
    aModel.Free;
  end;
end.
As you can see, we just added a reference to the mORMotWrappers unit, and a call to AddToServerWrapperMethod() in order to publish the available code generators.
Now, if you run the Project14ServerHttpWrapper server, and point your favorite browser to http://localhost:888/root/wrapper you will see the following page:
Client Wrappers
Available Templates:
* CrossPlatform mORMotClient.pas - download as file - see as text - see template
* Delphi mORMotClient.pas - download as file - see as text - see template
* SmartMobileStudio mORMotClient.pas - download as file - see as text - see template
You can also retrieve the corresponding template context.
Each of the *.mustache templates available in the specified folder is listed here. Links above will allow downloading a client source code unit, or displaying it as text in the browser. The template can also be displayed un-rendered, for reference. As true Mustache templates, the source code files are generated from a data context, which can be displayed, as JSON, from the "template context" link. It may help you when debugging your own templates. Note that if you modify and save a .mustache template file, just re-load the "see as text" browser page and your modification is taken into account immediately (you do not need to restart the server).
Generated source code will follow the template name, and here will always be downloaded as mORMotClient.pas. Of course, you can change the unit name for your end-user application. It could even be mandatory if the same client will access several mORMot servers at once, which could be the case in a Service-Oriented Architecture (SOA) project.
Just ensure that you will never change the mORMotClient.pas generated content by hand. If necessary, you can create and customize your own Mustache template, to be used for your exact purpose. By design, such automated code generation will require re-creating the client unit each time the server ORM or SOA structure is modified. In fact, as stated in the mORMotClient.pas comment, any manual modification of this file may be lost after regeneration. You have been warned!
For publishing the wrappers for a REST / ORM oriented program, take a look at the '28 - Simple RESTful ORM Server' sample.
If you feel that the current templates have some issues or need some enhancements, you are very welcome to send us your change requests on our forums. Once you are used to it, Mustache templates are fairly easy to work with. Similarly, if you find out that some information is missing in the generated data context, e.g. for a new platform or language, we will be pleased to enhance the official mORMotWrapper.pas process.
17.2.2. Delphi / FreePascal client samples
The "27 - CrossPlatform ClientsRegressionTests.dpr" sample creates a mORMot server with its own ORM data model, containing a TSQLRecordPeople class, and a set of interface-based SOA services, some including complex types like a record.
Then this sample uses a generated mORMotClient.pas, retrieved from the "download as file" link of the CrossPlatform template above. Its set of regression tests (written using a small cross-platform TSynTest unit test class) will then perform remote ORM and SOA calls to the PeopleServer embedded instance, over all supported authentication schemes - see below:
Cross Platform Units for mORMot
---------------------------------
1. Running "Iso8601DateTime"
30003 tests passed in 00:00:018
2. Running "Base64Encoding"
304 tests passed in 00:00:000
3. Running "JSON"
18628 tests passed in 00:00:056
4. Running "Model"
1013 tests passed in 00:00:003
5. Running "Cryptography"
4 tests passed in 00:00:000
Tests failed: 0 / 49952
Time elapsed: 00:00:080
Cross Platform Client for mORMot without authentication
---------------------------------------------------------
1. Running "Connection"
2 tests passed in 00:00:010
2. Running "ORM"
4549 tests passed in 00:00:160
3. Running "ORMBatch"
4564 tests passed in 00:00:097
4. Running "Services"
26253 tests passed in 00:00:302
5. Running "CleanUp"
1 tests passed in 00:00:000
Tests failed: 0 / 35369
Time elapsed: 00:00:574
Cross Platform Client for mORMot using TSQLRestServerAuthenticationNone
-------------------------------------------------------------------------
...
Cross Platform Client for mORMot using TSQLRestServerAuthenticationDefault
----------------------------------------------------------------------------
...
The generated mORMotClient.pas unit is used for all "Cross Platform Client" tests above, covering both ORM and SOA features of the framework.
17.2.2.1. Connection to the server
You could manually connect to a mORMot server as such:
var Model: TSQLModel;
    Client: TSQLRestClientHTTP;
...
  Model := TSQLModel.Create([TSQLAuthUser,TSQLAuthGroup,TSQLRecordPeople]);
  Client := TSQLRestClientHTTP.Create('localhost',SERVER_PORT,Model);
  if not Client.Connect then
    raise Exception.Create('Impossible to connect to the server');
  if Client.ServerTimestamp=0 then
    raise Exception.Create('Incorrect server');
  if not Client.SetUser(TSQLRestAuthenticationDefault,'User','synopse') then
    raise Exception.Create('Impossible to authenticate to the server');
...
Or you may use the GetClient() function generated in mORMotClient.pas:
/// create a TSQLRestClientHTTP instance and connect to the server
// - it will use by default port 888
// - secure connection will be established via TSQLRestServerAuthenticationDefault
//   with the supplied credentials - on connection or authentication error,
//   this function will raise a corresponding exception
function GetClient(const aServerAddress, aUserName, aPassword: string;
  aServerPort: integer=SERVER_PORT): TSQLRestClientHTTP;
Which could be used as such:
var Client: TSQLRestClientHTTP;
...
Client := GetClient('localhost','User','synopse')
The data model and the expected authentication scheme were included in the GetClient() function, which will raise the expected ERestException in case of any connection or authentication issue.
17.2.2.2. CRUD/ORM remote access
Thanks to the SynCrossPlatform* units, you could easily perform any remote ORM operation on your mORMot server, with the usual TSQLRest CRUD methods. For instance, the RegressionTests.dpr sample performs the following operations:
  fClient.CallBackGet('DropTable',[],Call,TSQLRecordPeople); // call of method-based service
  check(Call.OutStatus=HTTP_SUCCESS);
  people := TSQLRecordPeople.Create; // create a record ORM
  try
    for i := 1 to 200 do begin
      people.FirstName := 'First'+IntToStr(i);
      people.LastName := 'Last'+IntToStr(i);
      people.YearOfBirth := i+1800;
      people.YearOfDeath := i+1825;
      people.Sexe := TPeopleSexe(i and 1);
      check(Client.Add(people,true)=i); // add one record
    end;
  finally
    people.Free;
  end;
...
  people := TSQLRecordPeople.CreateAndFillPrepare(fClient,'',
    'yearofbirth=?',[1900]); // parameterized query returning one or several rows
  try
    n := 0;
    while people.FillOne do begin
      inc(n);
      check(people.ID=100);
      check(people.FirstName='First100');
      check(people.LastName='Last100');
      check(people.YearOfBirth=1900);
      check(people.YearOfDeath=1925);
    end;
    check(n=1); // we expected only one record here
  finally
    people.Free;
  end;
  for i := 1 to 200 do
    if i and 15=0 then
      fClient.Delete(TSQLRecordPeople,i) else // record deletion
    if i mod 82=0 then begin
      people := TSQLRecordPeople.Create;
      try
        id := i+1;
        people.ID := i;
        people.YearOfBirth := id+1800;
        people.YearOfDeath := id+1825;
        check(fClient.Update(people,'YEarOFBIRTH,YEarOfDeath')); // record modification
      finally
        people.Free;
      end;
    end;
  for i := 1 to 200 do begin
    people := TSQLRecordPeople.Create(fClient,i); // retrieve one instance from ID
    try
      if i and 15=0 then // was deleted
        Check(people.ID=0) else begin
        if i mod 82=0 then
          id := i+1 else // was modified
          id := i;
        Check(people.ID=i);
        Check(people.FirstName='First'+IntToStr(i));
        Check(people.LastName='Last'+IntToStr(i));
        Check(people.YearOfBirth=id+1800);
        Check(people.YearOfDeath=id+1825);
        Check(ord(people.Sexe)=i and 1);
      end;
    finally
      people.Free;
    end;
  end;
As we already stated, BATCH mode is also supported, with the classic mORMot syntax:
...
    res: TIntegerDynArray;
...
  fClient.BatchStart(TSQLRecordPeople);
  people := TSQLRecordPeople.Create;
  try
    for i := 1 to 200 do begin
      people.FirstName := 'First'+IntToStr(i);
      people.LastName := 'Last'+IntToStr(i);
      people.YearOfBirth := i+1800;
      people.YearOfDeath := i+1825;
      fClient.BatchAdd(people,true);
    end;
  finally
    people.Free;
  end;
  check(fClient.BatchSend(res)=HTTP_SUCCESS);
  check(length(res)=200);
  for i := 1 to 200 do
    check(res[i-1]=i); // server returned the IDs of the newly created records
Those BatchAdd / BatchDelete / BatchUpdate methods of TSQLRest have the benefit of introducing, at the client level:
Much higher performance, especially on multi-insertion or multi-update of data;
Transactional support: TSQLRest.BatchStart() has an optional AutomaticTransactionPerRow parameter, set to 10000 by default, which will create a server-side transaction during the write process, enable Array binding or Optimized SQL for bulk insert on the server side if available, and an ACID rollback in case of any failure.
You can note that all the above code has exactly the same structure and methods as standard mORMot clients.
The generated mORMotClient.pas unit contains all needed TSQLRecord types and their used properties, including enumerations or complex records. The only dependencies of this unit are the SynCrossPlatform* units, so it will be perfectly cross-platform (whereas our main SynCommons.pas and mORMot.pas units target only Win32 and Win64).
As a result, you are able to share server and client code between a Windows project and any supported platform, even AJAX (see "Smart Mobile Studio client samples" below). A shared unique code base will eventually reduce both implementation and debugging time, which is essential to unleash your business code potential and maximize your ROI.
17.2.2.3. Service consumption
The ultimate goal of the mORMot framework is to publish your business via a Service-Oriented Architecture (SOA). As a consequence, those services should be made available from any kind of device or platform, even outside the Windows world. The server is able to generate client wrappers code, which could be used to consume any Client-Server services via interfaces using any supported authentication scheme - see below.
Here is an extract of the mORMotClient.pas unit as generated for the RegressionTests.dpr sample:
type
  /// service implemented by TServiceCalculator
  // - you can access this service as such:
  // !var aCalculator: ICalculator;
  // !begin
  // !   aCalculator := TCalculator.Create(aClient);
  // !   // now you can use aCalculator methods
  // !...
  ICalculator = interface(IServiceAbstract)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    function Add(const n1: integer; const n2: integer): integer;
    procedure ToText(const Value: currency; const Curr: string; var Sexe: TPeopleSexe; var Name: string);
    function RecordToText(var Rec: TTestCustomJSONArraySimpleArray): string;
  end;

  /// implements ICalculator from http://localhost:888/root/Calculator
  // - this service will run in sicShared mode
  TServiceCalculator = class(TServiceClientAbstract,ICalculator)
  public
    constructor Create(aClient: TSQLRestClientURI); override;
    function Add(const n1: integer; const n2: integer): integer;
    procedure ToText(const Value: currency; const Curr: string; var Sexe: TPeopleSexe; var Name: string);
    function RecordToText(var Rec: TTestCustomJSONArraySimpleArray): string;
  end;
As you can see, a dedicated class has been generated to consume the server-side ICalculator interface-based service, in its own ICalculator client-side type. It is able to handle complex types, like enumerations (e.g. TPeopleSexe) and records (e.g. TTestCustomJSONArraySimpleArray), which are also defined in the very same mORMotClient.pas unit. You can note that the RawUTF8 type has been changed into the standard Delphi / FreePascal string type, since it is the native type used by our SynCrossPlatformJSON.pas unit for all its JSON marshalling. Of course, under the latest versions of Delphi and FreePascal, this kind of content may be Unicode encoded (either as UTF-16 for the Delphi string = UnicodeString type, or as UTF-8 for the FreePascal / Lazarus string type).
The supplied regression tests show how to use remotely those services:
var calc: ICalculator;
    i,j: integer;
    sex: TPeopleSexe;
    name: string;
...
  calc := TServiceCalculator.Create(fClient);
  check(calc.InstanceImplementation=sicShared);
  check(calc.ServiceName='Calculator');
  for i := 1 to 200 do
    check(calc.Add(i,i+1)=i*2+1);
  for i := 1 to 200 do begin
    sex := TPeopleSexe(i and 1);
    name := 'Smith';
    calc.ToText(i,'$',sex,name);
    check(sex=sFemale);
    check(name=format('$ %d for %s Smith',[i,SEX_TEXT[i and 1]]));
  end;
...
As with regular mORMot client code, a TServiceCalculator instance is created and is assigned to an ICalculator local variable. As such, no try ... finally Calc.Free end block is mandatory here, to avoid any memory leak: the compiler will create such a hidden block for the Calc: ICalculator variable scope.
The server-side contract of the ICalculator signature is retrieved and checked within TServiceCalculator.Create, and will raise an ERestException if it does not match the contract identified in mORMotClient.pas.
The cross-platform clients are able to manage the service instance life-time, especially the sicPerClient mode. In this case, an implementation class instance will be created on the server for each client, until the corresponding interface instance is released (i.e. goes out of scope or is assigned nil), which will release the server-side instance - just like with regular mORMot client code.
Note that all the process here is executed synchronously, i.e. in blocking mode. It is up to you to ensure that your application remains responsive, even if the server does a lot of processing and may therefore be slow to answer. A dedicated thread may help in this case.
17.2.3. Smart Mobile Studio client samples
In addition to Delphi and FreePascal clients, our framework is able to access any mORMot server from HTML5 / AJAX rich client, thanks to Smart Mobile Studio.
17.2.3.1. Adding two numbers in AJAX
You can find in SQLite3\Samples\27 - CrossPlatform Clients\SmartMobileStudio a simple client for the TServiceCalculator.Add() interface-based service. If your Project14ServerHttpWrapper server is running, you can just point to the supplied www\index.html file in the sub-folder. You will then see a web page with a "Server Connect" button, and if you click on it, you will be able to add two numbers. This is a full HTML5 web application, connecting securely to your mORMot server, which will work from any desktop browser (on Windows, Mac OS X, or Linux), or from any mobile device (either iPhone / iPad / Android / Windows 8 Mobile).
In order to create the application, we just clicked on "download as file" in the SmartMobileStudio link in the web page, and copied the generated file in the source folder of a new Smart Mobile project. Of course, we did copy the needed SynCrossPlatform*.pas units from the mORMot source code tree into the Smart library folder, as stated above. Just ensure you run CopySynCrossPlatformUnits.bat from the CrossPlatform folder at least once from the latest revision of the framework source code.
Then, on the form visual editor, we added a BtnConnect button, then a PanelCompute panel with two edit fields named EditA and EditB, and two other buttons, named BtnComputeAsynch and BtnComputeSynch. A LabelResult label will be used to display the computation result. The BtnConnect is a toggle which will show or hide the PanelCompute panel, which is hidden by default, depending on the connection status.
Smart Mobile Studio Calculator Sample
In the Form1.pas unit source code, we added a reference to both the SynCrossPlatformREST and mORMotClient units, and some events to the buttons:
The BtnConnect event will connect asynchronously to the server, using 'User' as log-on name, and 'synopse' as password (those are the framework defaults). We just use the GetClient() function, as published in our generated mORMotClient.pas unit:
/// create a TSQLRestClientHTTP instance and connect to the server
// - it will use by default port 888
// - secure connection will be established via TSQLRestServerAuthenticationDefault
//   with the supplied credentials
// - request will be asynchronous, and trigger onSuccess or onError event
procedure GetClient(const aServerAddress, aUserName, aPassword: string;
  onSuccess, onError: TSQLRestEvent; aServerPort: integer=SERVER_PORT);
It uses two callbacks, the first in case of success, and the second triggered on failure. On success, we will set the global Client variable with the TSQLRestClientURI instance just created, then display the two fields and compute buttons:
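As a minimal sketch only - the event handler and control names follow the form layout described above, but the exact code of the shipped sample may differ - the connection handler could look like this:

procedure TForm1.BtnConnectTouch(Sender: TObject);
begin
  if Client=nil then
    // connect asynchronously, using the framework default credentials
    GetClient('127.0.0.1','User','synopse',
      lambda (aClient: TSQLRestClientURI)
        Client := aClient;            // store the connected instance
        PanelCompute.Visible := true; // show the computation panel
      end,
      lambda (aClient: TSQLRestClientURI)
        ShowMessage('Impossible to connect to the server');
      end);
end;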
The GetClient() function expects two callbacks, respectively onSuccess and onError, which are implemented here with two SmartPascal lambda blocks.
Now that we are connected to the server, let's do some useful computation! As you can see in the mORMotClient.pas generated unit, our interface-based service can be accessed via a SmartPascal TServiceCalculator class (and not an interface), with two variations of each method: one asynchronous method - e.g. TServiceCalculator.Add() - expecting success/error callbacks, and one synchronous (blocking) method - e.g. TServiceCalculator._Add():
type
  /// service accessible via http://localhost:888/root/Calculator
  // - this service will run in sicShared mode
  // - synchronous and asynchronous methods are available, depending on use case
  // - synchronous _*() methods will block the browser execution, so won't be
  //   appropriate for long process - on error, they may raise EServiceException
  TServiceCalculator = class(TServiceClientAbstract)
  public
    /// will initialize an access to the remote service
    constructor Create(aClient: TSQLRestClientURI); override;
    procedure Add(n1: integer; n2: integer;
      onSuccess: procedure(Result: integer); onError: TSQLRestEvent);
    function _Add(const n1: integer; const n2: integer): integer;
  end;
We can therefore execute asynchronously the Add() service as such:
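A minimal sketch of such an asynchronous call (again, the handler and control names are illustrative, the real sample code may differ slightly) could be:

procedure TForm1.BtnComputeAsynchTouch(Sender: TObject);
begin
  TServiceCalculator.Create(Client).Add(
    StrToInt(EditA.Text),StrToInt(EditB.Text),
    lambda (res: integer)
      LabelResult.Caption := IntToStr(res); // onSuccess: display the result
    end,
    lambda (aClient: TSQLRestClientURI)
      ShowMessage('Error calling the Add() service');
    end);
end;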
Of course, the synchronous code is much easier to follow and maintain. To be fair, the SmartPascal lambda syntax is not difficult to read nor write. In the browser debugger, you can easily set a break point within any lambda block, and debug your code.
Note that if the server is slow to answer, your whole web application will be unresponsive, and the browser may even complain about the page, proposing to kill its process! As a consequence, simple services may be written in a synchronous manner, but your serious business code should rather use asynchronous callbacks, just as with any modern AJAX application.
Thanks to the Smart Linking feature of its compiler, only the used version of the unit will be converted to JavaScript and included in the final index.html HTML5 file. So having both synchronous and asynchronous versions of each method at hand is not an issue.
17.2.3.2. CRUD/ORM remote access
If the server did have some ORM model, its TSQLRecord classes will also be part of the mORMotClient.pas generated unit. All types, even complex record structures, will be marshaled as expected.
For instance, if you run the RegressionTestsServer.dpr server (available in the same folder), a much more complete unit could be generated from http://localhost:888/root/wrapper:
type
  // define some enumeration types, used below
  TPeopleSexe = (sFemale, sMale);
  TRecordEnum = (reOne, reTwo, reLast);

type
  // define some record types, used as properties below
  TTestCustomJSONArraySimpleArray = record
    F: string;
    G: array of string;
    H: record
      H1: integer;
      H2: string;
      H3: record
        H3a: boolean;
        H3b: TSQLRawBlob;
      end;
    end;
    I: TDateTime;
    J: array of record
      J1: byte;
      J2: TGUID;
      J3: TRecordEnum;
    end;
  end;

type
  /// service accessible via http://localhost:888/root/Calculator
  // - this service will run in sicShared mode
  // - synchronous and asynchronous methods are available, depending on use case
  // - synchronous _*() methods will block the browser execution, so won't be
  //   appropriate for long process - on error, they may raise EServiceException
  TServiceCalculator = class(TServiceClientAbstract)
  public
    /// will initialize an access to the remote service
    constructor Create(aClient: TSQLRestClientURI); override;
    procedure Add(n1: integer; n2: integer;
      onSuccess: procedure(Result: integer); onError: TSQLRestEvent);
    function _Add(const n1: integer; const n2: integer): integer;
    procedure ToText(Value: currency; Curr: string; Sexe: TPeopleSexe; Name: string;
      onSuccess: procedure(Sexe: TPeopleSexe; Name: string); onError: TSQLRestEvent);
    procedure _ToText(const Value: currency; const Curr: RawUTF8; var Sexe: TPeopleSexe; var Name: RawUTF8);
    procedure RecordToText(Rec: TTestCustomJSONArraySimpleArray;
      onSuccess: procedure(Rec: TTestCustomJSONArraySimpleArray; Result: string); onError: TSQLRestEvent);
    function _RecordToText(var Rec: TTestCustomJSONArraySimpleArray): string;
  end;

  /// map "People" table
  TSQLRecordPeople = class(TSQLRecord)
  protected
    fFirstName: string;
    fLastName: string;
    fData: TSQLRawBlob;
    fYearOfBirth: integer;
    fYearOfDeath: word;
    fSexe: TPeopleSexe;
    fSimple: TTestCustomJSONArraySimpleArray;
    // those overriden methods will emulate the needed RTTI
    class function ComputeRTTI: TRTTIPropInfos; override;
    procedure SetProperty(FieldIndex: integer; const Value: variant); override;
    function GetProperty(FieldIndex: integer): variant; override;
  public
    property FirstName: string read fFirstName write fFirstName;
    property LastName: string read fLastName write fLastName;
    property Data: TSQLRawBlob read fData write fData;
    property YearOfBirth: integer read fYearOfBirth write fYearOfBirth;
    property YearOfDeath: word read fYearOfDeath write fYearOfDeath;
    property Sexe: TPeopleSexe read fSexe write fSexe;
    property Simple: TTestCustomJSONArraySimpleArray read fSimple write fSimple;
  end;
In the above code, you can see several methods of the ICalculator service, some involving the complex TTestCustomJSONArraySimpleArray record type. The implementation section of the unit will in fact allow serialization of such records to/from JSON, even with obfuscated JavaScript field names.
Some enumeration types are also defined, which will help your business code be very expressive, thanks to the SmartPascal strong typing. This is a huge improvement when compared to JavaScript's native weak and dynamic typing.
There is a TSQLRecordPeople class generated, which will map the following Delphi class type, as defined in the PeopleServer.pas unit:
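The server-side declaration is not reproduced here; as a sketch only, reconstructed from the generated unit above (so some details may differ from the actual PeopleServer.pas), it most probably looks like this:

type
  TSQLRecordPeople = class(TSQLRecord)
  protected
    fFirstName: RawUTF8;
    fLastName: RawUTF8;
    fData: TSQLRawBlob;
    fYearOfBirth: integer;
    fYearOfDeath: word;
    fSexe: TPeopleSexe;
    fSimple: TTestCustomJSONArraySimpleArray;
    // manual registration of the fSimple complex record field, since such
    // a field cannot be published via regular RTTI
    class procedure InternalRegisterCustomProperties(Props: TSQLRecordProperties); override;
  published
    property FirstName: RawUTF8 read fFirstName write fFirstName;
    property LastName: RawUTF8 read fLastName write fLastName;
    property Data: TSQLRawBlob read fData write fData;
    property YearOfBirth: integer read fYearOfBirth write fYearOfBirth;
    property YearOfDeath: word read fYearOfDeath write fYearOfDeath;
    property Sexe: TPeopleSexe read fSexe write fSexe;
  public
    property Simple: TTestCustomJSONArraySimpleArray read fSimple write fSimple;
  end;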
Here, a complex TTestCustomJSONArraySimpleArray record field has been published, thanks to a manual InternalRegisterCustomProperties() registration, as we already stated above. Since SmartPascal is limited in terms of RTTI, the code generator defines some ComputeRTTI(), GetProperty() and SetProperty() protected methods, which will, at runtime, perform all the properties marshalling to and from JSON. You can see that types like RawUTF8 in the original Delphi TSQLRecord were mapped to the standard SmartPascal string type, as expected, when converted to the mORMotClient.pas generated unit.
Your AJAX client can then access this TSQLRecordPeople content easily, via standard CRUD operations. See the SQLite3\Samples\29 - SmartMobileStudio Client sample, for instance the following lines:
people := new TSQLRecordPeople;
for i := 1 to 200 do begin
  assert(client.Retrieve(i,people));
  assert(people.ID=i);
  assert(people.FirstName='First'+IntToStr(i));
  assert(people.LastName='Last'+IntToStr(i));
  assert(people.YearOfBirth=id+1800);
  assert(people.YearOfDeath=id+1825);
end;
Here, the client variable is a TSQLRestClientURI instance, as returned by the GetClient() onSuccess callback generated in mORMotClient.pas. You have Add() Delete() Update() FillPrepare() CreateAndFillPrepare() and Batch*() methods available, ready to safely access your data from your AJAX client.
If you update your data model on the server, just re-generate your mORMotClient.pas unit from http://localhost:888/root/wrapper, then rebuild your Smart Mobile Studio project to reflect all changes made to your ORM data model, or your SOA available services.
Thanks to the SmartPascal strong typing, any breaking change of the server expectations will immediately be reported at compilation, and not at runtime, as it will with regular JavaScript clients.
18. MVC pattern
Adopt a mORMot
The mORMot framework allows writing rich and/or web MVC applications, relying on regular ORM and SOA methods to implement its business model and its application layer, with an optional dedicated MVC model for the HTML rendering.
18.1. Model
According to the Model-View-Controller (MVC) pattern - see Model-View-Controller - the database schema should be handled separately from the User Interface.
The TSQLModel class centralizes all TSQLRecord inherited classes used by an application, both database-related and business-logic related.
See ORM Data Model for how to define the model of your application.
18.2. Views
The mORMot framework also features two kinds of User Interface generation, corresponding to the MVC Views:
For Desktop clients written in Delphi, it allows creation of Ribbon-like interfaces, with full data view and navigation as visual Grids. Reporting and editing windows can be generated in an automated way. The whole User Interface is designed in code, by some constant definitions.
For Web clients, an optimized Mustache Template engine in pure Delphi has been integrated, and allows easy creation of HTML views, with a clear MVC design.
MVC Web and Rich Clients
The Web Presentation Tier will be detailed below, but we will now present the project-wide implementation proposal.
18.2.1. Desktop clients
18.2.1.1. RTTI
The Delphi language (aka Object Pascal) has provided Runtime Type Information (RTTI) for more than a decade. In short, Runtime Type Information is information about an object's data type that is set into memory at run-time. The RTTI support in Delphi has been added first and foremost to allow the design-time environment to do its job, but developers can also take advantage of it to achieve certain code simplifications. Our framework makes huge use of RTTI, from the database level to the User Interface. Therefore, the resulting program has the advantages of very fast development (Rails-like), but with the robustness of strong type syntax, and the speed of one of the best compilers available.
In short, it allows the software logic to be extracted from the code itself. Here are the places where this technology was used:
All database structures are set in the code by normal classes definition, and most of the needed SQL code is created on the fly by the framework, before calling the SQLite3 database engine, resulting in a true Object-relational mapping (ORM) framework;
All User Interface is generated by the code, by using some simple data structures, relying on enumerations (see next paragraph);
Most of the text displayed on the screen does rely on RTTI, thanks to the Camel approach (see below), ready to be translated into local languages;
All internal Event process (such as Button press) relies on enumerations RTTI;
Options and program parameters are using RTTI for data persistence and screen display (e.g. the Settings window of your program can be created by pure code): adding an option is a matter of a few code lines.
In Delphi, enumeration types, or Enums, provide a way to define a list of values. The values have no inherent meaning, and their ordinality follows the sequence in which the identifiers are listed. These values are written once in the code, then used everywhere in the program, even for User Interface generation.
For example, some tool-bar actions can be defined with:
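As an illustration only - the actual demo defines its own identifiers - such a definition could look like the following, with the paCreateNew value mentioned in the next paragraph as the second action:

type
  // hypothetical tool-bar action enumeration (for illustration purposes)
  TPeopleAction = (
    paPrintPreview, paCreateNew, paDelete, paSelectAll, paExportAsTextFile);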
The caption of the buttons to be displayed on the screen is then extracted by the framework using "Camel Case": the second button, defined by the paCreateNew identifier in the source code, is displayed as "Create new" on the screen, and this "Create new" is used for direct i18n of the software. For further information about "Camel Case" and its usage in Object Pascal, Java, Dot Net, Python see http://en.wikipedia.org/wiki/CamelCase
The advantages of RTTI can therefore be summed up as follows:
Software maintainability, since the whole program logic is code-based, and the User Interface is created from it. It therefore avoids RAD (Rapid Application Development) abuse, which mixes the User Interface with data logic, and could lead to "write fast, try to maintain" scenarios;
Enhanced code security, thanks to Object Pascal strong type syntax;
Direct database access from the language object model, without the need of writing SQL or use of a MVC framework;
User Interface coherency, since most screens are created on the fly;
Easy i18n of the software, without additional components or systems.
18.2.1.2. User Interface
User Interface generation from RTTI and the integrated reporting features will be described below, during presentation of the Main Demo application design.
In short, such a complex model, including User Interface auto-creation, is defined in unit FileTables.pas: the whole definition is set by a constant array, and will use the TFileAction / TFileEvent enumeration types to handle the User Interface activity and the Business Logic.
18.2.2. Web clients
18.2.2.1. Mustache template engine
Mustache - see http://mustache.github.io - is a well-known logic-less template engine. There is plenty of Open Source implementations around (including in JavaScript, which can be very convenient for AJAX applications on client side, for instance). For mORMot, we created the first pure Delphi implementation of it, with a perfect integration with other bricks of the framework.
Generally speaking, a Template system can be used to separate output formatting specifications, which govern the appearance and location of output text and data elements, from the executable logic which prepares the data and makes decisions about what appears in the output.
Most template systems (e.g. PHP, smarty, Razor...) feature in fact a full scripting engine within the template content. It allows powerful constructs like variable assignment or conditional statements in the middle of the HTML content. It makes it easy to modify the look of an application within the template system exclusively, without having to modify any of the underlying "application logic". They do so, however, at the cost of separation, turning the templates themselves into part of the application logic.
Mustache inherits from Google's ctemplate library, and is used in many famous applications, including the "main" Google web search, or the Twitter web site. The Mustache template system leans strongly towards preserving the separation of logic and presentation, therefore ensures a perfect MVC - Model-View-Controller - design, and is ready to consume SOA services.
Mustache is intentionally constrained in the features it supports and, as a result, applications tend to require quite a bit of code to instantiate a template: all the application logic will be defined within the Controller code, not in the View source. This may not be to everybody's tastes. However, while this design limits the power of the template language, it does not limit the power or flexibility of the template system. This system supports arbitrarily complex text formatting.
Finally, Mustache is designed with an eye towards efficiency. Template instantiation is very quick, with an eye towards minimizing both memory use and memory fragmentation. As a result, it sounds like a perfect template system for our mORMot framework.
18.2.2.2. Mustache principles
There are two main parts to the Mustache template system:
Templates (which are plain text files);
Data dictionaries (aka Context).
For instance, given the following template:
<h1>{{header}}</h1>
{{#items}}
{{#first}}
<li><strong>{{name}}</strong></li>
{{/first}}
{{#link}}
<li><a href="{{url}}">{{name}}</a></li>
{{/link}}
{{/items}}
{{#empty}}
<p>The list is empty.</p>
{{/empty}}
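The data context supplied for this example is not reproduced above; deduced from the rendering below, it is typically expressed as the following JSON (values assumed here, following the classic ctemplate demo):

{
  "header": "Colors",
  "items": [
    {"name": "red",   "first": true, "url": "#Red"},
    {"name": "green", "link": true,  "url": "#Green"},
    {"name": "blue",  "link": true,  "url": "#Blue"}
  ],
  "empty": true
}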
The Mustache engine will render this data as such:
<h1>Colors</h1>
<li><strong>red</strong></li>
<li><a href="#Green">green</a></li>
<li><a href="#Blue">blue</a></li>
<p>The list is empty.</p>
In fact, you did not see any "if" nor "for" loop in the template, but Mustache conventions make it easy to render the supplied data as the expected HTML output. It is up to the MVC Controller to render the data as expected by the template, e.g. for formatting dates or currency values.
18.2.2.3. Mustache templates
The Mustache template logic-less language has five types of tags:
Variables;
Sections;
Inverted Sections;
Comments;
Partials.
All those tags will be identified with mustaches, i.e. {{...}}. Anything found in a template of this form is interpreted as a template marker. All other text is considered formatting text and is output verbatim at template expansion time.
{{variable}}
  The variable name will be searched recursively within the current context (possibly with dotted names), and, if found, will be written as escaped HTML. If there is no such key, nothing will be rendered.
{{{variable}}} or {{& variable}}
  The variable name will be searched recursively within the current context, and, if found, will be written directly, without any HTML escape. If there is no such key, nothing will be rendered.
{{#section}} ... {{/section}}
  Defines a block of text, aka section, which will be rendered depending on the section variable value, as searched in the current context:
  - if section equals false or is an empty list [], the whole block won't be rendered;
  - if section is non-false but not a list, it will be used as the context for a single rendering of the block;
  - if section is a non-empty list, the text in the block will be rendered once for each item in the list - the context of the block will be set to the current item for each iteration.
{{^section}} ... {{/section}}
  Defines a block of text, aka inverted section, which will be rendered depending on the section variable inverted value, as searched in the current context:
  - if section equals false or is an empty list, the whole block will be rendered;
  - if section is non-false or a non-empty list, it won't be rendered.
{{! comment}}
  The comment text will just be ignored.
{{>partial}}
  The partial name will be searched within the registered partials list, then will be executed at run-time (so recursive partials are possible), with the current execution context.
{{=...=}}
  The delimiters (i.e. by default {{...}}) will be replaced by the specified characters (may be convenient when double-braces may appear in the text).
In addition to those standard markers, the mORMot implementation of Mustache features:
{{helperName value}}
  Expression Helper, able to change the value on the fly, before rendering. It could be used e.g. to display dates as text from TDateTime or TTimeLog values.
{{.}}
  This pseudo-variable refers to the context object itself instead of one of its members. This is particularly useful when iterating over lists.
{{-index}}
  This pseudo-variable returns the current item number when iterating over lists, starting counting at 1 ({{-index0}} will start counting at 0).
{{#-first}} ... {{/-first}}
  Defines a block of text (pseudo-section), which will be rendered - or not rendered for inverted {{^-first}} - for the first item when iterating over lists.
{{#-last}} ... {{/-last}}
  Defines a block of text (pseudo-section), which will be rendered - or not rendered for inverted {{^-last}} - for the last item when iterating over lists.
{{#-odd}} ... {{/-odd}}
  Defines a block of text (pseudo-section), which will be rendered - or not rendered for inverted {{^-odd}} - for the odd item numbers when iterating over lists: it can be very useful e.g. to display a list with alternating row colors.
{{<partial}} ... {{/partial}}
  Defines an in-lined partial - to be called later via {{>partial}} - within the scope of the current template.
{{"some text}}
  This pseudo-variable will supply the given text to a callback, which will for instance transform its content (e.g. translate it), before writing it to the output.
This set of markers will allow writing any kind of content, without any explicit logic nor nested code. As a major benefit, the template content could be edited and verified without the need of any Mustache compiler, since all those {{...}} markers will identify very clearly the resulting layout.
18.2.2.3.1. Variables
A typical Mustache template:
Hello {{name}}
You have just won {{value}} dollars!
Well, {{taxed_value}} dollars, after taxes.
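Given a data context such as the following (values assumed from the output shown just below):

{ "name": "Chris", "value": 10000, "taxed_value": 6000 }

the engine will render it as: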
Hello Chris
You have just won 10000 dollars!
Well, 6000 dollars, after taxes.
You can note that {{variable}} tags are escaped for HTML by default. This is a mandatory security feature. In fact, all web applications which create HTML documents can be vulnerable to Cross-Site-Scripting (XSS) attacks unless data inserted into a template is appropriately sanitized and/or escaped. With Mustache, this is done by default. Of course, you can override it and force the value not to be escaped, using {{{variable}}} or {{& variable}}.
* Chris
*
* <b>GitHub</b>
* <b>GitHub</b>
18.2.2.3.2. Sections
Sections render blocks of text one or more times, depending on the value of the key in the current context.
In our "wining template" above, what happen if we do want to hide the tax details? In most script languages, we may write an if ... block within the template. This is what Mustache avoids. So we define a section, which will be rendered on need.
The template becomes:
Hello {{name}}
You have just won {{value}} dollars!
{{#in_ca}}
Well, {{taxed_value}} dollars, after taxes.
{{/in_ca}}
Here, we created a new section, named in_ca.
Given the hash value of in_ca (and its presence), the section will be rendered, or not:
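For instance - using values consistent with the previous example, since the original table is not reproduced here:
- with { "name": "Chris", "value": 10000, "taxed_value": 6000, "in_ca": true }, all three lines are rendered, including the taxes sentence;
- with { "name": "Chris", "value": 10000 } - no in_ca key - only the first two lines are rendered, and the {{#in_ca}} ... {{/in_ca}} block is skipped.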
Sections also change the context of its inner block. It means that the section variable content becomes the top-most context which will be used to identify any supplied variable key.
Therefore, the following context will be perfectly valid: we can define taxed_value as a member of in_ca, and it will be rendered directly, since it is part of the new context.
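Such a context could be written as follows (the 3000 value comes from the explanation just below):

{
  "name": "Chris",
  "value": 10000,
  "taxed_value": 3000,
  "in_ca": { "taxed_value": 6000 }
}

and will be rendered as: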
Hello Chris
You have just won 10000 dollars!
Well, 6000 dollars, after taxes.
In the latest context above, there are two taxed_value variables. The engine will use the one defined by the context in the in_ca section, i.e. in_ca.taxed_value; the one defined at the root context level (which equals 3000) is just ignored.
If the variable pointed to by the section name is a list, the text in the block will be rendered once for each item in the list. The context of the block will be set to the current item for each iteration. In this way we can loop over collections. Mustache allows any depth of nested loops (e.g. any level of master/details information).
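For instance - an illustrative example, not copied from the original document - a list could be rendered as such:

Template: {{#colors}}* {{.}} {{/colors}}
Context:  { "colors": [ "red", "green", "blue" ] }
Output:   * red * green * blue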
The latest template makes use of the {{.}} pseudo-variable, which allows to render the current item of the list.
18.2.2.3.3. Inverted Sections
An inverted section begins with a caret (^) and ends as a standard (non-inverted) section. They may render text once, based on the inverse value of the key. That is, the text block will be rendered if the key doesn't exist, is false, or is an empty list.
Inverted sections are usually defined after a standard section, to render some message in case no information will be written in the non-inverted section:
Template: {{#repo}} <b>{{.}}</b> {{/repo}} {{^repo}} No repos :( {{/repo}}
Context:  { "repo": [] }
Output:   No repos :(
18.2.2.3.4. Partials
Partials are some kind of external sub-templates which can be included within a main template, for instance to follow the same rendering at several places. Just like functions in code, they do ease template maintainability and spare development time.
Partials are rendered at runtime (as opposed to compile time), so recursive partials are possible. Just avoid infinite loops. They also inherit the calling context, so can easily be re-used within a list section, or together with plain variables.
In practice, partials shall be supplied together with the data context - they could be seen as "template context".
For example, this "main" template uses a {{> user}} partial:
In mORMot's implementations, you can also create some internal partials, defined as {{<partial}} ... {{/partial}} pseudo-sections. It may decrease the need of maintaining multiple template files, and refine the rendering layout.
For instance, the previous template may be defined at once:
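For instance, following the sketch above, both the main template and its partial could be written in a single file, as such:

  <h2>Names</h2>
  {{#names}}
    {{> user}}
  {{/names}}
  {{<user}}
    <strong>{{name}}</strong>
  {{/user}}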
The same file will define both the partial and the main template. Note that we defined the internal partial after the main template, but we may have defined it anywhere in the main template logic: internal partials definitions are ignored when rendering the main template, just like comments.
18.2.2.4. SynMustache unit
Part of our mORMot framework, we implemented an optimized Mustache template engine in the SynMustache unit:
It is the first Delphi implementation of Mustache;
It has a separate parser and renderer (so you can compile your templates ahead of time);
The parser features a shared cache of compiled templates;
It passes all official Mustache specification tests, as defined at http://github.com/mustache/spec - including all weird whitespace processing;
Almost no memory allocation is performed during the rendering;
It is natively UTF-8, from the ground up, with optimized conversion of any string data;
Performance has been tuned and grounded in SynCommons.pas's optimized code;
Each parsed template is thread-safe and re-entrant;
It follows the Open/Closed principle - see SOLID design principles - so that any aspect of the process can be customized and extended (e.g. for any kind of data context);
It is perfectly integrated with the other bricks of our mORMot framework, ready to implement dynamic web sites with true Model-View-Controller design, and full separation of concerns in the views written in Mustache, the controllers being e.g. interface-based services - see Client-Server services via interfaces, and the models being our Object-Relational Mapping (ORM) classes;
API is flexible and easy to use.
18.2.2.4.1. Variables
Now, let's see some code. First, we define our needed variables:
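For the snippets below, the following local variables are enough (doc will only be used for the TDocVariant-based version):

var mustache: TSynMustache;
    doc: variant;
    html: RawUTF8;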
In order to parse a template, you just need to call:
mustache := TSynMustache.Parse(
'Hello {{name}}'#13#10'You have just won {{value}} dollars!');
It will return a compiled instance of the template. The Parse() class method will use the shared cache, so you won't need to release the mustache instance once you are done with it: no need to write a try ... finally mustache.Free; end block.
You can use a TDocVariant to supply the context data (with late-binding):
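A minimal context matching the template above could be initialized via late-binding as such:

  TDocVariant.New(doc); // initialize doc as a TDocVariant object
  doc.name := 'Chris';
  doc.value := 10000;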
Now you can render the template with this context:
html := mustache.Render(doc);
// now html='Hello Chris'#13#10'You have just won 10000 dollars!'
If you want to supply the context data as JSON, then render it, you may write:
mustache := TSynMustache.Parse(
'Hello {{value.name}}'#13#10'You have just won {{value.value}} dollars!');
html := mustache.RenderJSON('{value:{name:"Chris",value:10000}}');
// now html='Hello Chris'#13#10'You have just won 10000 dollars!'
Note that here, the JSON is supplied with MongoDB-like extended syntax (i.e. field names are unquoted), and that TSynMustache is able to identify a dotted-named variable within the execution context.
As an alternative, you could use the following syntax to create the data context as JSON, with a set of parameters, therefore easier to work with in real code storing data in variables (for instance, any string variable is quoted as expected by JSON, and converted into UTF-8):
mustache := TSynMustache.Parse(
'Hello {{name}}'#13#10'You have just won {{value}} dollars!');
html := mustache.RenderJSON('{name:?,value:?}',[],['Chris',10000]);
// now html='Hello Chris'#13#10'You have just won 10000 dollars!'
You can find in the mORMot.pas unit the ObjectToJSON() function which is able to transform any TPersistent instance into valid JSON content, ready to be supplied to a TSynMustache compiled instance. If the object's published properties have some getter functions, they will be called on the fly to process the data (e.g. returning 'FirstName Name' as FullName by concatenating both sub-fields).
18.2.2.4.2. Sections
Sections are handled as expected:
mustache := TSynMustache.Parse('Shown.{{#person}}As {{name}}!{{/person}}end{{name}}');
html := mustache.RenderJSON('{person:{age:?,name:?}}',[10,'toto']);
// now html='Shown.As toto!end'
Note that the sections change the data context, so that within the #person section, you can directly access the data context person member, i.e. write directly {{name}}.
It supports also inverted sections:
mustache := TSynMustache.Parse('Shown.{{^person}}Never shown!{{/person}}end');
html := mustache.RenderJSON('{person:true}');
// now html='Shown.end'
To render a list of items, you can write for instance (using the {{.}} pseudo-variable):
mustache := TSynMustache.Parse('{{#things}}{{.}}{{/things}}');
html := mustache.RenderJSON('{things:["one", "two", "three"]}');
// now html='onetwothree'
The {{-index}} pseudo-variable allows numbering the list items, when rendering:
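A minimal sketch (the expected rendering is given here for illustration, not copied from the original document):

mustache := TSynMustache.Parse('{{#things}}{{-index}}={{.}} {{/things}}');
html := mustache.RenderJSON('{things:["one", "two", "three"]}');
// html should now equal '1=one 2=two 3=three '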
18.2.2.4.3. Partials
External partials (i.e. standard Mustache partials) can be defined using TSynMustachePartials. You can define and maintain a list of TSynMustachePartials instances, or you can use a one-time partial, for a given rendering process, as such:
mustache := TSynMustache.Parse('{{>partial}}'#$A'3');
html := mustache.RenderJSON('{}',TSynMustachePartials.CreateOwned(['partial','1'#$A'2']));
// now html='1'#$A'23'
Internal partials (one of the SynMustache extensions), can be defined directly in the main template:
mustache := TSynMustache.Parse('{{<partial}}1'#$A'2{{name}}{{/partial}}{{>partial}}4');
html := mustache.RenderJSON('{name:3}');
// now html='1'#$A'234'
18.2.2.4.4. Expression Helpers
Expression Helpers are an extension to the standard Mustache definition. They allow to define your own set of functions which will be called during the rendering, to transform one value from the context into a value to be rendered.
TSynMustache.HelpersGetStandardList will return a list of standard static helpers, able to convert TDateTime or TTimeLog values into text, or convert any value into its JSON representation. The current list of registered helpers is DateTimeToText, DateToText, DateFmt, TimeLogToText, BlobToBase64, JSONQuote, JSONQuoteURI, ToJSON, EnumTrim, EnumTrimRight, PowerOfTwo, Equals, If, MarkdownToHtml, SimpleToHtml and WikiToHtml. For instance, {{TimeLogToText CreatedAt}} will convert a TCreateTime field value into ready-to-be-displayed text.
The mustache tag syntax is {{helpername value}}. The supplied value parameter may be a variable name in the current context, or could be a constant number ({{helpername 123}}), a constant JSON string ({{helpername "constant text"}}), a JSON array ({{helpername [1,2,3]}}) or a JSON object ({{helpername {name:"john",age:24}}}). The value could also be a comma-separated set of values, which will be translated into a corresponding JSON array, the values being extracted from the current context, as with {{DateFmt DateValue,"dd/mm/yyyy"}}.
You could call recursively the helpers, just like you nest functions: {{helper1 helper2 value}} will call helper2 with the supplied value, which result will be passed as value to helper1.
But you can create your own list of registered Expression Helpers, even including some business logic, to compute any data during rendering, via TSynMustache.HelperAdd methods.
The helper should be implemented with such a method:
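The expected shape matches the DateFmt() implementation shown just below (the exact event type name is declared in SynMustache.pas):

  procedure MyHelper(const Value: variant; out result: variant);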
Here, the supplied Value parameter will be either from a variable of the context, or a constant, from JSON number, string, array or object - encoded as a TDocVariant custom variant type.
If the parameters were supplied as a comma-separated list, you may write multi-parameter functions as such:
class procedure TSynMustache.DateFmt(const Value: variant; out result: variant);
begin
  with _Safe(Value)^ do
    if (Kind=dvArray) and (Count=2) and (TVarData(Values[0]).VType=varDate) then
      result := FormatDateTime(Values[1],TVarData(Values[0]).VDate) else
      SetVariantNull(result);
end;
So you could use such expression helper this way:
La date courante en France est : {{DateFmt DateValue,"dd/mm/yyyy"}}
The Equals helper is defined as such:
class procedure TSynMustache.Equals_(const Value: variant; out result: variant);
begin // {{#Equals .,12}}
  with _Safe(Value)^ do
    if (Kind=dvArray) and (Count=2) and
       (SortDynArrayVariant(Values[0],Values[1])=0) then
      result := true else
      SetVariantNull(result);
end;
You may use it in your template to provide additional view logic:
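For instance - an illustrative fragment, assuming a hypothetical Category variable in the data context:

  {{#Equals Category,"Blog"}} This item belongs to the Blog category. {{/Equals}}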
The #If helper is even more powerful, since it allows defining some view logic, via = < > <= >= <> operators set between two values:
{{#if .,"=",6}} Welcome, number six! {{/if}}
{{#if Total,">",1000}} Thanks for your income: your loyalty will be rewarded. {{/if}}
{{#if info,"<>",""}} Warning: {{info}} {{/if}}
As an alternative, you could just put the operator without a string parameter:
{{#if .=6}} Welcome, number six! {{/if}}
{{#if Total>1000}} Thanks for your income: your loyalty will be rewarded. {{/if}}
{{#if info<>""}} Warning: {{info}} {{/if}}
This latter syntax may be pretty convenient to work with. Of course, since Mustache is expected to be a logic-less templating engine, you had better not use the #if helper in most cases, but rather add some dedicated flags in the supplied data context:
{{#isNumber6}} Welcome, number six! {{/isNumber6}}
{{#showLoyaltyMessage}} Thanks for your income: your loyalty will be rewarded. {{/showLoyaltyMessage}}
{{#showWarning}} Warning: {{info}} {{/showWarning}}
Helpers can be used to convert some wiki or markdown content into plain HTML, for instance, in the MVC blog sample, a ContentHtml boolean flag defines if a content (here the abstract text field) is already HTML-encoded, or if it needs to be converted via the WikiToHtml helper:
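Such a view fragment could look like this sketch (the field names follow the description above, but the actual blog sample template may differ):

  {{#ContentHtml}}{{{abstract}}}{{/ContentHtml}}
  {{^ContentHtml}}{{WikiToHtml abstract}}{{/ContentHtml}}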
The framework also offers some built-in optional Helpers tied to its ORM. If you create a MVC web application using mORMotMVC.pas - see below - you can register a set of Expression Helpers to let your Mustache views retrieve a given TSQLRecord from its ID, or display a given instance's fields in an auto-generated table.
This will define two Expression Helpers for the specified table:
Any {{#TSQLMyRecord MyRecordID}} ... {{/TSQLMyRecord MyRecordID}} Mustache tag will read a TSQLMyRecord from the supplied ID value and put its fields into the current rendering data context, ready to be displayed in the view.
Any {{TSQLMyRecord.HtmlTable MyRecord}} Mustache tag will create an HTML table containing all information about the supplied MyRecord fields (from the current data context), with complex field handling (like TDateTime, TTimeLog, sets or enumerations), and proper display of the field names (including i18n).
18.2.2.4.5. Internationalization
You can define {{"some text}} pseudo-variables in your templates, which text will be supplied to a callback, ready to be transformed on the fly: it may be convenient for i18n of web applications.
By default, the text will be written directly to the output buffer, but you can define a callback which may be used e.g. for text translation:
procedure TTestLowLevelTypes.MustacheTranslate(var English: string);
begin
  if English='Hello' then
    English := 'Bonjour' else
  if English='You have just won' then
    English := 'Vous venez de gagner';
end;
Then, you will be able to define your template as such:
mustache := TSynMustache.Parse(
'{{"Hello}} {{name}}'#13#10'{{"You have just won}} {{value}} {{"dollars}}!');
html := mustache.RenderJSON('{name:?,value:?}',[],['Chris',10000],nil,MustacheTranslate);
// now html='Bonjour Chris'#$D#$A'Vous venez de gagner 10000 dollars!'
All text has indeed been translated as expected.
18.2.2.5. Low-level integration with method-based services
You can easily integrate the Mustache template engine with the framework's ORM. To avoid any unneeded temporary conversion, you can use the TSQLRest.RetrieveDocVariantArray() method, and provide its TDocVariant result as the data context of TSynMustache.Render().
var template: TSynMustache;
html: RawUTF8;
...
template := TSynMustache.Parse(
'<ul>{{#items}}<li>{{Name}} was born on {{BirthDate}}</li>{{/items}}</ul>');
html := template.Render(
aClient.RetrieveDocVariantArray(TSQLBaby,'items','Name,BirthDate'));
// now html will contain a ready-to-be-displayed unordered list
Of course, this TSQLRest.RetrieveDocVariantArray() method accepts an optional WHERE clause, to be used according to your needs. You may even use paging, to split the list in smaller pieces.
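For instance - a hedged sketch reusing the same overload as shown later for comments retrieval, with hypothetical filtering and paging values:
html := template.Render(aClient.RetrieveDocVariantArray(
  TSQLBaby,'items','Name like ? order by Name limit ? offset ?',
  ['A%',20,40],'Name,BirthDate'));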
Following this low-level method-based services process, you can easily create a high performance web server using mORMot, following the MVC pattern as such:
But still, a lot of code is needed to glue the MVC parts.
18.2.2.6. MVC/MVVM Design
In practice, the method-based services MVC pattern is difficult to work with. You have a lot of plumbing to code by yourself, e.g. parameter marshalling, rendering or routing.
The mORMotMVC.pas unit offers a true MVVM (Model View ViewModel) design, much more advanced, which relies on interface definitions to build the application - see Interfaces:
In the MVVM pattern, both Model and View components do match the classic Model-View-Controller layout. But the ViewModel will define some kind of "model for the view", i.e. the data context to be sent and retrieved from the view.
In the mORMot implementation, interface methods are used to define the execution context of any request, following the convention over configuration pattern of our framework. In fact, the following conventions are used to define the ViewModel:
ViewModel concept - mORMot implementation:
Route - from the interface name and its method name;
Command - defined by the method name;
Controller - defined by the method implementation;
ViewModel Context - transmitted by representation, as JSON, including complex values like TSQLRecord, records, dynamic arrays or variants (including TDocVariant);
Input Context - transmitted as method input parameters (const/var) from the View;
Output Context - method output parameters (var/out) are sent to the View;
Actions - a method will render the associated view with the output parameters, or go to another command (optionally via EMVCApplication).
This may sound pretty unusual (if you are coming from RubyOnRails, AngularJS, Meteor or .Net implementations), but it has proven to be pretty convenient to use. The main benefit is that you do not need to define explicit data structures for the ViewModel layer. The method parameters will declare the execution context for you at interface level, ready to be implemented in a TMVCApplication class. In practice, this implementation uses the interface input and output parameters as an alternate way to define the $scope content of an AngularJS application.
The fact that the ViewModel data context is transmitted as JSON content - by representation just like REST - see REST - allows nice side effects:
Views do not know anything about the execution context, so are very likely to be uncoupled from any business logic - this will enhance security and maintainability of your applications;
You can optionally see in real time the JSON data context (by using a fake root/methodname/json URI) of a running application, for easier debugging of the Controller or the Views;
You can test any View by using fake static JSON content, without the need of a real server;
In fact, Views could even not be tied to the web model, but run in a classic rich application, with a VCL/FMX User Interface (we still need to automate the binding process to UI components, but this is technically feasible, whereas almost no other MVC web framework supports this);
In the Controller code, you have access to the mORMot ORM methods and services to write the various commands, making it pretty easy to implement a web front-end to any SOA project (also sharing a lot of high-level Domain types);
The associated data Model is mORMot's ORM, which is also optimized for JSON processing, so most of memory fragmentation is reduced to the minimum during the rendering (see e.g. the use of RawJSON below);
The Controller will be most of the time hosted within the web server application, but may be physically hosted in another remote process - this remote Controller service may even be shared between web and VCL/FMX clients;
Several levels of cache could be implemented, based on the JSON content, to leverage the server resources and scale over a huge number of clients.
The next chapter will uncover how to build such solid MVC / MVVM Web Applications using mORMot.
19. MVC/MVVM Web applications
We will now explain how to build a MVC/MVVM web application using mORMot, starting from the "30 - MVC Server" sample. The following explanations may be a bit out of sync with the current state of the sample source code in the "unstable" branch of the framework repository, but the main points presented below remain valid.
This little web application publishes a simple BLOG, not fully finished yet (this is a Sample, remember!). But you can still execute it in your desktop browser, or any mobile device (thanks to a simple Bootstrap-based responsive design), and see the articles list, view one article and its comments, view the author information, log in and out.
Then the whole database model will be created in this function:
function CreateModel: TSQLModel;
begin
result := TSQLModel.Create([TSQLBlogInfo,TSQLAuthor,
TSQLTag,TSQLArticle,TSQLComment,TSQLArticleSearch],'blog');
TSQLArticle.AddFilterNotVoidText(['Title','Content']);
TSQLComment.AddFilterNotVoidText(['Title','Content']);
TSQLTag.AddFilterNotVoidText(['Ident']);
result.Props[TSQLArticleSearch].FTS4WithoutContent(TSQLArticle);
end;
As you can discover:
We used class inheritance to gather properties for similar tables;
Some classes are not part of the model, since they are just abstract parents, e.g. TSQLContent is not part of the model, but TSQLArticle and TSQLComment are;
We defined some regular one-to-one relationships, e.g. every Content (which may be either an Article or a Comment) will be tied to one Author - see "One to one" or "One to many";
We defined some regular one-to-many relationships, e.g. every Comment will be tied to one Article;
Article tags are stored as a dynamic array of integer within the record, and not in a separated pivot table: it will make the database smaller, and queries faster (since we avoid a JOIN);
Some properties are defined (and stored) twice, e.g. TSQLContent defines one AuthorName field in addition to the Author ID field, as a convenient direct access to the author name, therefore avoiding a JOINed query at each Article or Comment display - see Shared nothing architecture (or sharding), and the class sketch after this list;
We defined the maximum expected width for text fields (e.g. via Title: RawUTF8 index 80), even if it won't be used by SQLite3 - it will ease any eventual migration to an external database in the future - see Code-first ORM;
Some validation rules are set using the TSQLArticle.AddFilterNotVoidText() method, and will be applied before an article is stored, from the controller's code (in TBlogApplication.ArticleCommit);
The whole application will run without writing any SQL, but just high-level ORM methods;
Even if we want to avoid writing SQL, we tried to model the data to fit regular RDBMS expectations, e.g. for the most used queries (like the one run from the main page of the BLOG);
Full Text indexation data, implemented as FTS3/FTS4/FTS5 in the SQLite3 engine, is stored in a dedicated TSQLArticleSearch table - see FTS4 index tables without content for details about this powerful feature.
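As a hedged illustration of the points above, the declarations in the sample's model unit may look like the following sketch - parent class names, field widths and the exact field lists are assumptions, not the sample's verbatim code:
type
  TSQLContent = class(TSQLRecord) // abstract parent, not part of the model
  protected
    fTitle: RawUTF8;
    fContent: RawUTF8;
    fAuthor: TSQLAuthor;
    fAuthorName: RawUTF8;
  published
    property Title: RawUTF8 index 80 read fTitle write fTitle;
    property Content: RawUTF8 read fContent write fContent;
    property Author: TSQLAuthor read fAuthor write fAuthor; // one-to-one link, stored as ID
    property AuthorName: RawUTF8 index 50 read fAuthorName write fAuthorName; // denormalized copy
  end;

  TSQLArticle = class(TSQLContent) // stored as the Article table
  protected
    fPublishedMonth: integer;
    fTags: TIntegerDynArray;
  published
    property PublishedMonth: integer read fPublishedMonth write fPublishedMonth;
    property Tags: TIntegerDynArray read fTags write fTags; // no pivot table needed
  end;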
Foreign keys and indexes are managed as such:
The TSQLRecord.ID primary key of any ORM class will be indexed;
For both one-to-one and one-to-many relationships, indexes are created by the ORM: for instance, TSQLArticle.Author and TSQLComment.Author will be indexed, just as TSQLComment.Article;
A SQL index will be needed for TSQLArticle.PublishedMonth field, which is used to display a list of publication months in the main BLOG page, and link to the corresponding articles. The following code will take care of it:
class procedure TSQLArticle.InitializeTable(Server: TSQLRestServer;
  const FieldName: RawUTF8; Options: TSQLInitializeTableOptions);
begin
  inherited;
  if (FieldName='') or (FieldName='PublishedMonth') then
    Server.CreateSQLIndex(TSQLArticle,'PublishedMonth',false);
end;
19.1.2. Hosted in a REST server over HTTP
The ORM is defined to run over a SQLite3 database, and is then served over HTTP, as set up in the main MVCServer.dpr program:
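The actual program is not quoted in full here; a minimal sketch of its setup may look as follows - the exact constructor parameters and database file name are assumptions:
aModel := CreateModel;
aServer := TSQLRestServerDB.Create(aModel,ChangeFileExt(ExeVersion.ProgramFileName,'.db'));
aServer.CreateMissingTables;             // create the tables from the model, if needed
aApplication := TBlogApplication.Create;
aApplication.Start(aServer);             // inject the MVC behavior into aServer
aHTTPServer := TSQLHttpServer.Create('8092',[aServer]);  // publish over http.sys
aHTTPServer.RootRedirectToURI('blog/default'); // redirect http://server:8092 to the BLOG main page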
In comparison to a regular Client-Server process, we instantiated a TBlogApplication, which will inject the MVC behavior into aServer and aHTTPServer. The same mORMot program could be used as a RESTful server for remote Object-Relational Mapping (ORM) and Service-Oriented Architecture (SOA), and also for publishing a web application, sharing the same data and business logic, over a single HTTP URI and port. A call to RootRedirectToURI() will let any http://server:8092 HTTP request be redirected to http://server:8092/blog/default, which is our BLOG application main page. The other URIs could be used as usual, as any mORMot JSON RESTful Client-Server.
You could also use sub-domain hosting, as defined for Network and Internet access via HTTP, to make a difference between the REST methods and the MVC web site. For instance, you may define some per domain / per sub-domain hosting redirection:
aHttpServer.DomainHostRedirect('rest.project.com','root'); // 'root' is current Model.Root
aHttpServer.DomainHostRedirect('project.com','root/html'); // call the Html() method
aHttpServer.DomainHostRedirect('www.project.com','root/html'); // call the Html() method
aHttpServer.DomainHostRedirect('blog.project.com','root/blog'); // MVC application
All ORM/SOA activity should be accessed remotely via rest.project.com, and will then be handled as expected by the ORM/SOA methods of the TSQLRestServer instance. For proper AJAX / JavaScript process, you may have to write:
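One likely candidate - an assumption, since the original snippet is not reproduced here - is enabling cross-origin requests for AJAX clients served from another domain:
aHttpServer.AccessControlAllowOrigin := '*'; // allow cross-site AJAX queries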
Such an Html() method-based service can serve some static HTML content as the main front-end page of this server connected to the Internet. For best performance, this UTF-8 content is cached in memory, and the HTTP 304 status will be handled, if the browser supports it. Of course, your application may return some more complex content, even serving a set of files hosted in a local folder, e.g. by calling the Ctxt.ReturnFile() or Ctxt.ReturnFileFromFolder() methods in this Html() service:
procedure TMyServer.Html(Ctxt: TSQLRestServerURIContext);
begin
Ctxt.ReturnFileFromFolder('c:\www');
end;
This single method will search for any matching file in the local c:\www folder and its sub-directories, returning the default index.html content if no file is specified at URI level. See the optional parameters to the Ctxt.ReturnFileFromFolder() method for proper tuning, e.g. to change the default file name or disable the HTTP 304 answers. In all cases, the file content will be served by the High-performance http.sys server directly from the kernel mode, so will be very fast.
In order to have the BLOG content hosted in root/blog URI, you should specify the expected sub-URI when initializing your TMVCApplication:
procedure TBlogApplication.Start(aServer: TSQLRestServer);
begin
...
fMainRunner := TMVCRunOnRestServer.Create(self,nil,'blog').
...
Here, any request to blog.project.com will be redirected to root/blog, so will match the expected TBlogApplication URIs. Note that by default, TMVCRunOnRestServer.RunOnRestServerSub will redirect any root/blog request to root/blog/default, so this URI will be transparent for the user.
19.2. MVCViewModel
19.2.1. Defining the commands
The MVCViewModel.pas unit defines the Controller (or ViewModel) of the "30 - MVC Server" sample application. It uses the mORMotMVC.pas unit, which is the main MVC kernel for the framework, allowing to easily create Controllers binding the ORM/SOA features (mORMot.pas) to the Mustache Views (SynMustache.pas).
First of all, we defined an interface, with the expected methods corresponding to the various commands of the web application:
IBlogApplication = interface(IMVCApplication)
  procedure ArticleView(
    ID: integer; var WithComments: boolean; Direction: integer;
    out Article: TSQLArticle; out Author: TSQLAuthor;
    out Comments: TObjectList);
  procedure AuthorView(var ID: integer; out Author: variant; out Articles: variant);
  function Login(const LogonName,PlainPassword: RawUTF8): TMVCAction;
  function Logout: TMVCAction;
  procedure ArticleEdit(var ID: integer; const Title,Content: RawUTF8;
    const ValidationError: variant;
    out Article: TSQLArticle);
  function ArticleCommit(
    ID: integer; const Title,Content: RawUTF8): TMVCAction;
end;
As such, the IBlogApplication will define the following web pages, corresponding to each of its methods: Default, Error, ArticleView, AuthorView, Login, Logout, ArticleEdit and ArticleCommit. Each command of this application will map an URI, e.g. /blog/default or /blog/login - remember that our model defined 'blog' as its root URI. You may let all commands be accessible from a sub-URI (e.g. /blog/web/default), but here this is not needed, since we are creating a "pure web" application.
Each command will have its own View. For instance, you will find Default.html, Error.html or ArticleView.html in the "Views" sub-folder of the sample. If you did not supply any file in this folder, some void files will be created.
Incoming method parameters of each method (i.e. defined as const or var) will be transmitted on the URI, encoded as regular HTTP parameters, whereas outgoing method parameters (i.e. defined as var or out) will be transmitted to the View, as data context for the rendering. Simple types are transmitted (like integer or RawUTF8); but you will also find ORM classes (like TSQLAuthor), an outgoing TObjectList, or some variant - which may be either values or a complex TDocVariant custom variant type.
In fact, you may find out that the Login, Logout and ArticleCommit methods do not have any outgoing parameters, but were defined as function returning a TMVCAction record. This type is declared as such in mORMotMVC.pas:
TMVCAction = record
RedirectToMethodName: RawUTF8;
RedirectToMethodParameters: RawUTF8;
ReturnedStatus: cardinal;
end;
Any method returning a TMVCAction content won't render directly any view, but will allow to go directly to another method, for proper rendering, just by providing a method name and some optional parameters. Note that even the regular views, i.e. the methods which do not have this TMVCAction parameter, may break the default rendering process on any error, raising an EMVCApplication exception which will in fact redirect the view to another page, mainly the Error page.
To better understand how it works, run the "30 - MVC Server" sample. Remember that to be able to register the port #8092 for the http.sys server, you will need to run the MVCServer.exe program at least once with Windows Administrator rights - see URI authorization as Administrator. Then point your browser to http://localhost:8092/ - you will see the main page of the BLOG, filled with some random data. Quite some "blabla", to be sincere!
What you see is the Default page rendered. The IBlogApplication.Default() method has been called, then the outgoing Scope data has been rendered by the Default.html Mustache template.
If you click on an article title, it will go to http://localhost:8092/blog/articleView?id=99 - i.e. calling IBlogApplication.ArticleView() with the ID parameter containing 99, and other incoming parameters (i.e. WithComments and Direction) set to their default value (i.e. respectively false and 0). The ArticleView() method will then read the TSQLArticle data from the ORM, then send it to the ArticleView.html Mustache template.
Now, just change in your browser the URI from http://localhost:8092/blog/articleView?id=99 (here we clicked on the Article with ID=99) into http://localhost:8092/blog/articleView/json?id=99 (i.e. entering /articleView/json instead of /articleView, as a fake sub-URI). Now the browser is showing you the JSON data context, as transmitted to the ArticleView.html template. Just check both the JSON content and the corresponding Mustache template: I think you will find out how it works. Take a look at Mustache template engine as reference.
mORMot MVC/MVVM URI - Commands sequence
In this diagram, we can see that each HTTP request is stateless, uncoupled from the previous one. The user experience is created by changing the URI with additional parameters (like withComments=true). This is how the web works.
Then try to go to http://localhost:8092/blog/mvc-info - and check out the page which appears. You will get all the information corresponding to your application, especially a list of all available commands:
You may use this page as reference when writing your Mustache Views. It will reflect the exact state of the running application.
19.2.2. Implementing the Controller
To build the application Controller, we will need to implement our IBlogApplication interface.
TBlogApplication = class(TMVCApplication,IBlogApplication)
  ...
public
  procedure Start(aServer: TSQLRestServer); reintroduce;
  procedure Default(var Scope: variant);
  procedure ArticleView(ID: integer; var WithComments: boolean;
    Direction: integer;
    out Article: TSQLArticle; out Author: variant;
    out Comments: TObjectList);
  ...
end;
We defined a new class, inheriting from TMVCApplication - as defined in mORMotMVC.pas, and implementing our expected interface. TMVCApplication will do all the low-level plumbing for you, using a set of implementation classes.
Let's implement a simple command:
procedure TBlogApplication.AuthorView(var ID: integer; out Author: TSQLAuthor;
  out Articles: variant);
begin
  RestModel.Retrieve(ID,Author);
  if Author.ID<>0 then
    Articles := RestModel.RetrieveListJSON(
      TSQLArticle,'Author=? order by id desc limit 50',[ID],ARTICLE_FIELDS) else
    raise EMVCApplication.CreateGotoError(HTTP_NOTFOUND);
end;
By convention, all parameters are allocated when TMVCApplication will execute a method. So you do not need to allocate or handle the Author: TSQLAuthor instance lifetime. You have direct access to the underlying TSQLRest instance via TMVCApplication.RestModel: so all CRUD operations are available. You can let the ORM do the low level SQL work for you: to retrieve all information about one TSQLAuthor and get the list of its associated articles, we just use a TSQLRest method with the appropriate WHERE clause. Here we returned the list of articles as a TDocVariant, so that they will be transmitted as a JSON array, without any intermediate marshalling to TSQLArticle instances, but with the Tags dynamic array published property returned as an array of integers (you may have used TObjectList or RawJSON instead, as will be detailed below). In case of any error, an EMVCApplication will be raised: when such an exception happens, the TMVCApplication will handle and convert it into a page change, and a redirection to the IBlogApplication.Error() method, which will return an error page, using the Error.html view template.
procedure TBlogApplication.ArticleView(
  ID: integer; var WithComments: boolean; Direction: integer;
  out Article: TSQLArticle; out Author: variant; out Comments: TObjectList);
var newID: TID;
const WHERE: array[1..2] of PUTF8Char = (
  'ID<? order by id desc','ID>? order by id');
begin
  if Direction in [1,2] then // allows fast paging using the index on ID
    if RestModel.OneFieldValue(TSQLArticle,'ID',WHERE[Direction],[],[ID],newID) and
       (newID<>0) then
      ID := newID;
  RestModel.Retrieve(ID,Article);
  if Article.ID<>0 then begin
    Author := RestModel.RetrieveDocVariant(
      TSQLAuthor,'ID=?',[Article.Author.ID],'FirstName,FamilyName');
    if WithComments then begin
      Comments.Free; // we will override the TObjectList created at input
      Comments := RestModel.RetrieveList(TSQLComment,'Article=?',[Article.ID]);
    end;
  end else
    raise EMVCApplication.CreateGotoError(HTTP_NOTFOUND);
end;
This method has to manage several use cases:
Display an Article from the database;
Retrieve the Author first name and family name;
Optionally display the associated Comments;
Optionally get the previous or next Article;
Trigger an error in case of an invalid request.
Reading the above code is enough to understand how those 5 features are implemented in this method. The incoming parameters, as triggered by the Views, are used to identify the action to be taken. Then TMVCApplication.RestModel methods are used to retrieve the needed information directly from the ORM. Outgoing parameters (Article,Author,Comments) are transmitted to the MustacheView, for rendering.
In fact, there are several ways to retrieve your data, using the RestModel ORM methods. For instance, in the above code, we used a TObjectList to transmit our comments. But we may have used a TDocVariant custom variant type parameter:
procedure TBlogApplication.ArticleView(
ID: integer; var WithComments: boolean; Direction: integer;
out Article: TSQLArticle; out Author: variant; out Comments: variant);
...
if WithComments then
Comments := RestModel.RetrieveDocVariantArray(TSQLComment,'','Article=?',[Article.ID],'');
In this case, data will be returned per representation, as variant values. Any dynamic array properties will be identified in the TSQLRecord, and converted as proper array of values.
procedure TBlogApplication.ArticleView(
ID: integer; var WithComments: boolean; Direction: integer;
out Article: TSQLArticle; out Author: variant; out Comments: RawJSON);
...
if WithComments then
Comments := RestModel.RetrieveListJSON(TSQLComment,'Article=?',[Article.ID],'');
Using a RawJSON will be in fact the fastest way of processing the information on the server side. But it will return the data directly from the database - as a consequence, dynamic arrays will be returned as a Base64-encoded blob.
It is up to you to choose the method and encoding needed for your exact context. If your purpose is just to retrieve some data and push it back to the view, RawJSON is fast, but a TDocVariant will also convert dynamic arrays to a proper JSON array. If you want to process the returned information with some business code, returning a TObjectList may be convenient if you need to run some TSQLRecord methods on the returned list.
Or a TDocVariant array may fit your needs, if you want to create some meta-object gathering all information, e.g. for Scope as returned by the Default method:
procedure TBlogApplication.Default(var Scope: variant);
...
  if not fDefaultData.AddExistingProp('Archives',Scope) then
    fDefaultData.AddNewProp('Archives',RestModel.RetrieveDocVariantArray(
      TSQLArticle,'','group by PublishedMonth order by PublishedMonth desc limit 12',[],
      'distinct(PublishedMonth),max(ID)+1 as FirstID'),Scope);
end;
You can notice how the calendar months are retrieved from the database, using a safe fDefaultData: ILockedDocVariant private field to store the value as cache, in a thread-safe manner (we will see later more about how to implement thread-safety). If the 'Archives' value is in the fDefaultData cache, it will be returned immediately as part of the Scope returned document. Otherwise, it will use RestModel.RetrieveDocVariantArray to retrieve the last 12 available months. When a new Article is created, or modified, TBlogApplication.FlushAnyCache will call fDefaultData.Clear to ensure that the updated information will be retrieved from the database on next Default() call.
The above ORM request will generate the following SQL statement:
SELECT distinct(PublishedMonth),max(ID)+1 as FirstID FROM Article
group by PublishedMonth order by PublishedMonth desc limit 12
The Default() method will therefore return the following JSON context:
... which will be processed by the Mustache engine. If you put a breakpoint at the end of this Default() method, and inspect the "Scope" variable, the Delphi debugger will in fact show you in real time the exact JSON content, retrieved from the ORM.
I suspect you have just found out how mORMot's ORM/SOA abilities and JSON / TDocVariant offer amazing means of processing your data. You have the best of both worlds: ORM/SOA gives you fixed structures and strong typing (like in C++/C#/Java), whereas TDocVariant gives you a flexible object scheme, using late-binding to access its content (like in Python/Ruby/JavaScript).
19.2.3. Variable input parameters
If you want to support a variable number of named parameters, you can define a variant input parameter, and provide the input as a JSON document, using a TDocVariant storage. But marshalling the context as JSON will involve using some JavaScript in the HTML page, which may not be very convenient.
If you want to handle a non-fixed set of regular URI or POST values, you can prefix all the incoming parameter names with the dotted name of a single defined variant. For instance, if you have the following controller method:
function TnWebMVCMenu.CadastroSalvar3(const p: variant): TMVCAction;
Then you can supply as parameter at URI level:
p.a1=5&p.a2=dfasdfa
And you will be able to handle them in the controller body:
function TnWebMVCMenu.CadastroSalvar3(const p: variant): TMVCAction;
begin
GotoView(result,'Cadastro',
['pp1',p.a1,
'pp2',p.a2])
end;
You are now free to specify some versatile HTML forms in your views, and provide the controller with any kind of input parameters. Of course, it may sound safer and easier to explicitly define and name each one of the input parameters, with simple types like integer or RawUTF8. But this convention may help you work with any kind of HTML views.
19.2.4. Using Services in the Controller
Any controller method could retrieve and execute any dependency from its interface, following the IoC pattern - see Dependency Inversion Principle. The dependency resolution can be performed in two ways: resolved on the fly from the method body, or injected at construction time.
In fact, you can set up your TMVCApplication instance to use any external dependencies, including stubs and mocks, or high-level DDD services (e.g. repository or modelization process), using its CreateInjected() constructor instead of plain Create.
19.2.5. Controller Thread Safety
When run from a TSQLRestServer instance, our MVC application commands will be executed by default without any thread protection. When hosted within a TSQLHttpServer instance - see High-performance http.sys server - several threads may execute the same Controller methods at the same time. It is therefore up to your application code to ensure that your TMVCApplication process is thread safe.
Note that by design, all TMVCApplication.RestModel ORM methods are thread-safe. If your Controller business code only uses ORM methods, sending back the information to the Views, without storing any data locally, it will be perfectly thread safe. See for instance the TBlogApplication.AuthorView method we described above.
But consider this method (simplified from the real "30 - MVC Server" sample):
type
  TBlogApplication = class(TMVCApplication,IBlogApplication)
  protected
    fDefaultArticles: variant;
  ...

procedure TBlogApplication.Default(var Scope: variant);
begin
  if VarIsEmpty(fDefaultArticles) then
    fDefaultArticles := RestModel.RetrieveDocVariantArray(
      TSQLArticle,'','order by ID desc limit 20',[],ARTICLE_FIELDS);
  _ObjAddProps(['Articles',fDefaultArticles],Scope);
end;
In fact, even if this method may sound safe, we have an issue when it is executed by several threads: one thread may be assigning a value to fDefaultArticles, whereas another thread may be using the fDefaultArticles content. This may result in random runtime errors, very difficult to solve. Even creating a local variable may not be safe, since any access to fDefaultArticles should be protected.
A first - and brutal - solution could be to force the TSQLRestServer instance to execute all method-based services (including our MVC commands) in a giant lock, as stated about Thread-safety:
aServer.AcquireExecutionMode[execSOAByMethod] := amLocked; // or amBackgroundThread
But this may slow down the whole server process, and reduce its scaling abilities.
You could also lock explicitly the Controller method, for instance:
procedure TBlogApplication.Default(var Scope: variant);
begin
  Locker.ProtectMethod;
  if VarIsEmpty(fDefaultData) then
  ...
Using the TMVCApplication.Locker: IAutoLocker is a simple and efficient way of protecting your method. In fact, the TAutoLocker class' ProtectMethod will return an IUnknown variable, which will let the compiler create a hidden try .. finally block in the method body, releasing the lock when it exits. But this locker will be shared by the whole TMVCApplication instance, so it will act like a giant lock on your Controller process.
A more tuned and safe implementation may be to use a ILockedDocVariant instead of a plain TDocVariant for caching the data. You may therefore write:
type
  TBlogApplication = class(TMVCApplication,IBlogApplication)
  protected
    fDefaultData: ILockedDocVariant;
  ...

procedure TBlogApplication.Start(aServer: TSQLRestServer);
begin
  fDefaultData := TLockedDocVariant.Create;
  ...

procedure TBlogApplication.Default(var Scope: variant);
begin
  if not fDefaultData.AddExistingProp('Articles',Scope) then
    fDefaultData.AddNewProp('Articles',RestModel.RetrieveDocVariantArray(
      TSQLArticle,'','order by ID desc limit 20',[],ARTICLE_FIELDS),Scope);
end;
Using ILockedDocVariant will ensure that only access to this resource will be locked (no giant lock any more), and that slow ORM process (like RestModel.RetrieveDocVariantArray) will take place lock-free, to maximize the resource usage. This is in fact the pattern used by the "30 - MVC Server" sample. Even Client-Server services via interfaces may benefit from this TLockedDocVariant kind of storage, for efficient multi-thread process - see Server-side execution options (threading).
19.2.6. Web Sessions
Sessions are usually implemented via cookies, in web sites. A login/logout procedure enhances security of the web application, and User experience can be tuned via small persistence of client-driven data. The TMVCApplication class allows creating such sessions.
You can store whatever information you need within the client-side cookie. TMVCSessionWithCookies allows you to define a record, which will be used to store the information as optimized binary, in the browser cache. You can use this cookie information as a cache to the current session, e.g. storing the logged user display name, his/her preferences or rights - avoiding a round trip to the database. Of course, you should never trust the cookie content (even if our format uses secure encryption, and a digital signature via a HMAC-CRC32C algorithm). But you can use it as a convenient cache, always checking the real data in the database when you are about to perform any security-related action. The cookie also stores an integer Session ID, and issuing and expiration dates: as such, it matches all JWT (JSON Web Token) - see http://jwt.io - features, like signature, encryption, and jti/iat/exp claims, with a smaller overhead, and without using unsafe Web Local Storage.
For our "30 - MVC Server" sample application, we defined the following record in MVCViewModel.pas:
As raw binary, without the field names, within the cookie, after Base64 encoding of encrypted and digitally signed data;
As a JSON object, with explicit field names, when transmitted to the Views as "Session" data context.
In order to have proper JSON serialization of the record, you will need to specify its structure, if you use a version of Delphi without the new RTTI (i.e. before Delphi 2010) - see Record serialization.
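The sample's exact code is not quoted here; a simplified sketch of such a Login() method could look as follows - CheckPlainPassword() is a hypothetical helper standing for the sample's password hash check, and the error-handling calls are assumptions:
function TBlogApplication.Login(const LogonName, PlainPassword: RawUTF8): TMVCAction;
var Author: TSQLAuthor;
    SessionInfo: TCookieData;
begin
  if CurrentSession.CheckAndRetrieve<>0 then begin
    GotoError(result,HTTP_BADREQUEST); // a session is already opened
    exit;
  end;
  Author := TSQLAuthor.Create(RestModel,'LogonName=?',[LogonName]);
  try
    if (Author.ID<>0) and Author.CheckPlainPassword(PlainPassword) then begin
      SessionInfo.AuthorName := Author.LogonName;
      SessionInfo.AuthorID := Author.ID;
      SessionInfo.AuthorRights := Author.Rights;
      CurrentSession.Initialize(@SessionInfo,TypeInfo(TCookieData)); // compute the cookie
      GotoDefault(result);
    end else
      GotoError(result,HTTP_UNAUTHORIZED); // invalid credentials -> error page
  finally
    Author.Free;
  end;
end;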
As you can see, this Login() method will be triggered from http://localhost:8092/blog/login with LogonName=...&plainpassword=... parameters. It will first check that there is no current session, retrieve the ORM Author corresponding to the LogonName, check the supplied password, and set the SessionInfo: TCookieData structure with the needed information. A call to CurrentSession.Initialize() will compute the cookie, then prepare to send it to the client browser.
The Login() method returns a TMVCAction structure. As a consequence, the call to GotoDefault(result) will let the TMVCApplication processor render the Default() method, as if the /blog/default URI had been requested. On invalid credentials, an error page is displayed instead.
When a web page is computed, the following overridden method will be executed:
function TBlogApplication.GetViewInfo(MethodIndex: integer): variant;
begin
result := inherited GetViewInfo(MethodIndex);
  _ObjAddProps(['blog',fBlogMainInfo,
    'session',CurrentSession.CheckAndRetrieveInfo(TypeInfo(TCookieData))],result);
end;
It will append the session information from the cookie to the returned View data context, as such:
Here, the session object will contain the TCookieData information, ready to be processed by the Mustache View - e.g. as session.AuthorName. In addition, your view may include some buttons for logged-only features, like comments or content edition, using boolean fields defined in session.AuthorRights.
For security reasons, before actually performing an action requiring a specific right, it is preferable to check from the Model whether the user is effectively allowed. An attacker may have forged a fake cookie - even if it is very unlikely, since cookies are encrypted and signed. It is a good approach to treat all cookie information as an unsafe cache, acceptable for most operations, but which should always be double-checked. So your server code will call CurrentSession.CheckAndRetrieve, then access the data via RestModel for verification, before any sensitive action is performed. Defining a common method could be handy:
function TBlogApplication.GetLoggedAuthorID(Right: TSQLAuthorRight;
  ContentToFillAuthor: TSQLContent): TID;
var SessionInfo: TCookieData;
    author: TSQLAuthor;
begin
  result := 0;
  if (CurrentSession.CheckAndRetrieve(@SessionInfo,TypeInfo(TCookieData))>0) and
     (Right in SessionInfo.AuthorRights) then
    with TSQLAuthor.AutoFree(author,RestModel,SessionInfo.AuthorID) do
      if Right in author.Rights then begin
        result := SessionInfo.AuthorID;
        if ContentToFillAuthor<>nil then begin
          ContentToFillAuthor.Author := pointer(result);
          ContentToFillAuthor.AuthorName := author.LogonName;
        end;
      end;
end;
It will be used as such, e.g. to verify if a user can comment an article:
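For instance - a hedged fragment, where canComment is assumed to be one of the TSQLAuthorRight enumeration values used by the sample:
// e.g. at the beginning of a (hypothetical) comment-posting method:
var comm: TSQLComment;
...
  if GetLoggedAuthorID(canComment,comm)=0 then
    raise EMVCApplication.CreateGotoError(HTTP_UNAUTHORIZED); // not allowed to comment
  // comm.Author and comm.AuthorName have been filled from the verified session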
Eventually, when the browser asks for the /blog/logout URI, the following method will be executed:
function TBlogApplication.Logout: TMVCAction;
begin
  CurrentSession.Finalize;
  GotoDefault(result);
end;
The session cookie will then be deleted on the browser side.
Note that if any deprecated or invalid cookie is detected by the mORMot MVC server, it will also be automatically deleted on the browser side.
19.3. Writing the Views
See Mustache template engine for a description of how rendering takes place in this MVC/MVVM application. You will find the Mustache templates in the "Views" sub-folder of the "30 - MVC Server" sample application.
You will find some *.html files, one per command expecting a View, and some *.partial files, which are some kind of re-usable sub-templates - we use them to easily compute the page header and footer, and to have a convenient way of gathering some piece of template code, to be re-used in several *.html views.
The {{>partials}} are easily identified, as are the other {{...}} value tags. The main partial is {{>articlerow}}, which will display the list of articles. {{{WikiToHtml main.blog.about}}} is an Expression Block able to render some simple text into proper HTML, using a simple Wiki syntax. {{MonthToText PublishedMonth}} will execute a custom Expression Block, defined in our TBlogApplication, which will convert the obfuscated TSQLArticle.PublishedMonth integer value into the corresponding month name and year:
procedure TBlogApplication.MonthToText(const Value: variant;
  out result: variant);
const MONTHS: array[0..11] of string = (
  'January','February','March','April','May','June','July','August',
  'September','October','November','December');
var month: integer;
begin
  if VariantToInteger(Value,month) and (month>0) then
    result := MONTHS[month mod 12]+' '+IntToStr(month div 12);
end;
The page displaying the Author information is in fact quite simple:
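Its content is not reproduced verbatim here; the core of such an AuthorView.html template could be summarized by the following hedged fragment (partial names other than articlerow are assumptions):
{{>header}}
{{{TSQLAuthor.HtmlTable Author}}}
{{>articlerow}}
{{>footer}}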
It will share the same {{>partials}}, for a consistent and maintainable web site design, but in fact most of the process will take place by the magic of two tags:
{{{TSQLAuthor.HtmlTable Author}}} is an Expression Block linked to the TMVCApplication.RestModel ORM, which will create an HTML table - with the syntax expected by our BootStrap CSS - for a TSQLAuthor record, identifying the property types and displaying them as expected (e.g. for dates, time stamps, enumerates or sets).
{{>articlerow}} is a partial also shared with ArticleView.html, which will render a list of TSQLArticle encoded as {{#Articles}}...{{/Articles}} sections.
Take a look at the mORMotMVC.pas unit: you will discover that every aspect of the MVC process has been divided into small classes, so that the framework is able to create web applications, but also any kind of MVC applications, including mobile or VCL/FMX apps, and/or reporting - using mORMotReport.pas.
20. Hosting
We could identify several implementation patterns of a mORMot server and its clients:
Stand-alone application, either in the same process or locally on the same computer;
Private self-hosting, e.g. in a corporate network, with a mORMot executable or service publishing some content to clients locally or over the Internet (directly from a DMZ or via a VPN);
Cloud hosting, using a dedicated server in a data-center, or any cloud solution based on virtualization;
Mixed hosting, using CDN network services to cache most of the requests of your mORMot server.
As we already stated, our Client-Server process allows all these patterns. We will now detail some hosting schemes.
20.1. Windows and Linux hosted
The current version of the framework fully supports deploying the mORMot servers on the Windows platform, either as a Win32 executable, or - for latest versions of the Delphi compiler - as a Win64 executable.
Linux support (via FPC 3.2.x) is available, but we faced a compiler-level limitation with FPC 2.x, which does not supply the needed interface RTTI - see http://bugs.freepascal.org/view.php?id=26774 - so the SOA and MVC features do not work directly on old FPC revisions: you need to generate the RTTI from a Delphi compiler, as stated below. For the client side, there is no limitation, thanks to our Cross-Platform clients, which are perfectly supported even by the oldest FPC compilers under Linux. The Linux backend available in the latest Delphi versions is not supported, since FPC gives pretty good results (we have used it in production for years), and a Delphi Enterprise license would be required to access it - which we don't have.
In practice, a mORMot server has much lower hardware requirements (in CPU, storage and RAM terms) than a regular IIS-WCF-MSSQL-.Net stack. And it requires almost no maintenance.
As a consequence, the potential implementation schemes could be hosted as such:
Stand-alone application, without any explicit server;
Self-hosted service running on the corporate file server, or on a small dedicated VM or recycled computer (for best performance, just put your data on a new SSD on the old hardware PC);
Cloud services running Windows Server, with minimal configuration: IIS, .Net or MS SQL are not necessary at all - a cheap virtual system with 512 MB of memory is enough to run your mORMot service and serve hundreds of clients;
Linux servers, with no dependency (even latest version of SQlite3 is statically linked to the executables), using less hardware resource.
About the edition of Windows to be used, of course IT people will assure you that Windows Server is mandatory. But from our tests, you will obtain pretty good results, even with a regular Windows 7 or 8 version of the operating system. On the other hand, it is not serious to envisage hosting a server on Windows XP, which is not supported any more by Microsoft - even if technically a mORMot server will work very well on this deprecated platform.
Of course, if you use External SQL database access, the hardware and hosting expectations may vary. It will depend on the database back-end used, and will necessarily be much more demanding than our internal SQLite3 database engine. In practice, a mORMot server using a SQLite3 engine running on a SSD hardware, in lmExclusive mode - see ACID and speed - runs faster than most SQL or NoSQL engines available, since it will be hosted within the mORMot server process itself - see Highly concurrent clients performance.
20.2. Deployment Architecture
About hosting architecture, the easiest is to have your main TSQLRestServer class handling the service, in conjunction with other Client-Server process (like ORM). See General mORMot architecture - Client / Server about this generic Client-Server architecture and the "Shared server" next paragraph.
But you may find some (good?) reasons which may induce another design:
For better scalability, you should want to use a dedicated process (or even dedicated hardware) to split the database and the service process;
For security reasons, you want to expose only services to your Internet clients, setting up a DMZ hosting only the services, and keeping the database and business logic on a separate internal instance;
Services are not the main part of your business, and you would like to enable or disable the published SOA scope, on demand;
To implement an efficient solution for the most complex kind of application, as provided by Domain-Driven Design;
Your main data will be hosted on high performance SSD / NAS drives with safe RAID, but some data should better be hosted on cheaper storage (e.g. Audit Trail for change tracking);
You are selling one product, to be run on several environments (debugging / production, starter / corporate editions, centralized / P2P design...), depending on your clients demand;
Whatever your IT people or managers want mORMot to do.
The possibilities are endless, so we will here below only present some typical use-cases.
20.2.1. Shared server
This is the easiest configuration: one HTTP server instance, which serves both ORM and Services. In practice, this is perfectly working and scalable.
Service Hosting on mORMot - shared server
You can tune this solution, as such:
Setting the group user rights properly - see below - you can disable the remote ORM access from the Internet, for the AJAX Clients - but allow rich Delphi clients (like PC1) to access the ORM;
You can have direct in-process access to the service interfaces from the ORM, and vice-versa: if your services and ORM are deeply inter-dependent, direct access will be the faster solution.
20.2.2. Two servers
In this configuration, two physical servers are available:
A network DMZ is opened to serve only service content over the Internet, via "HTTP server 2";
Then on the local network, "HTTP server 1" is used by both PC 1 and Services to access the ORM;
Both "PC Client 1" and the ORM core are able to connect to Services via a dedicated "HTTP server 3".
Service Hosting on mORMot - two servers
Of course, the database will be located on "PC Server internal", i.e. the one hosting the ORM, and the Services will be one regular client: so we may use CRUD level cache on purpose to enhance performance. In order to access the remote ORM features, and provide a communication endpoint to the embedded services, a TSQLRestServerRemoteDB kind of server class can be used.
20.2.3. Two instances on the same server
This is the most complex configuration. In this case, only one physical server is deployed:
A dedicated "HTTP server 2" instance will serve service content over the Internet (via a DMZ configuration of the associated network card);
"PC Client 1" will access to the ORM via "HTTP server 1", or to services via "HTTP server 3";
For performance reasons, since ORM and Services are on the same computer, using named pipes (or even local Windows Messages) instead of slower HTTP-TCP/IP is a good idea: in such case, ORM will access services via "Named Pipe server 2", whereas Services will serve their content to the ORM via "Named Pipe server 1".
Service Hosting on mORMot - one server, two instances
Of course, you can make any combination of the protocols and servers, to tune hosting for a particular purpose. You can even create several ORM servers or Services servers (grouped per features family or per product), which will cooperate for better scaling and performance.
If you consider implementing a stand-alone application for hosting your services, with therefore only basic ORM needs (e.g. you may need only CRUD statements for handling authentication), you may use the lighter TSQLRestServerFullMemory kind of server instead of a full TSQLRestServerDB - which embeds a SQLite3 database engine, perhaps not worth it in this case.
A fourth pattern is to rely on a Content Delivery Network (CDN): your mORMot server may publish some dynamic HTML pages, or simple generic JSON services, and then let the CDN do the caching. An expiration time out of 30 seconds, configured at CDN level, will definitively help your web application to scale to thousands of visitors.
Service Hosting on mORMot - Content Delivery Network (CDN)
In practice, static content - see Returning file content - or some simple JSON requests - returned via Ctxt.Results() or an interface-based service - will benefit from using such a CDN.
When any client requests the mORMot server URI, it will be in fact redirected to the closest CDN node available. For instance, some client in Canada will be redirected to the "CDN US" server, or one mobile client in France will be redirected to the "CDN UK" server.
Then each CDN node will check if the requested URI is already in its cache, according to its settings and the expiration parameters which may be set within the HTTP cache headers. If the resource is in the local cache, it will be returned to the client immediately. If the resource is not in its cache, the CDN node will ask the mORMot server, cache the returned content, then return this content to the client. Any further request to this URI, compatible with the expiration parameters, won't trigger any request to the mORMot server.
Of course, you can define some URI patterns to never be cached, and point directly to the mORMot server. All authenticated services, for instance, will need direct access to the mORMot server, since the authentication scheme described below will append a session-private signature to each URI. Just ensure that you disabled authentication for the public content - using TSQLRestServer.ServiceMethodByPassAuthentication() for method-based services, or the TServiceFactoryServer.ByPassAuthentication property for interface-based services. The per-session signature appended to each URI would indeed void any attempt of third-party caching.
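A minimal sketch of such a setup - the 'html' method name, TMyService class and IMyService interface are placeholders:
// method-based service: serve the Html() page without any session signature
aServer.ServiceMethodByPassAuthentication('html');
// interface-based service: disable authentication for one service factory
aServer.ServiceDefine(TMyService,[IMyService],sicShared).ByPassAuthentication := true;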
If your project starts to have success, using a CDN is an easy and cheap way of increasing your number of clients. Your mORMot server will focus on its own purpose, which may be safe storage, authentication and high-level SOA, then let the remaining content be served by such a third-party caching system.
21. Security
The framework tries to implement security via process safety, authentication and authorization.
Authorization of a given process is based on the group policy, after proper authentication:
Per-table access right functionalities built-in at lowest level of the framework;
Per-method execution policy for interface-based services;
General high-level security attributes, for SQL or Service remote execution.
Process safety has already been documented (see links above).
We will now give general information about both authentication and authorization in the framework.
21.1. Authentication
Extracted from Wikipedia:
Authentication (from Greek: "real" or "genuine", from "author") is the act of confirming the truth of an attribute of a datum or entity. This might involve confirming the identity of a person or software program, tracing the origins of an artifact, or ensuring that a product is what its packaging and labeling claims to be. Authentication often involves verifying the validity of at least one form of identification.
21.1.1. Principles
How to handle authentication in a RESTful Client-Server architecture is a matter of debate.
Commonly, it can be achieved, in the SOA over HTTP world via:
HTTP basic auth over HTTPS;
Cookies and session management;
Query Authentication with additional signature parameters.
We'll have to adapt, or even better mix those techniques, to match our framework architecture at best.
Each authentication scheme has its own PROs and CONs, depending on the purpose of your security policy and software architecture:
Criteria                              HTTPS basic auth    Cookies+Session    Query Auth.
Browser integration                   Native              Native             Via JavaScript
User Interaction                      Rude                Custom             Custom
Web Service use (rough estimation)    95%                 4%                 1%
Session handling                      Yes                 Yes                No
Session managed by                    Client              Server             N/A
Password on Server                    Yes                 Yes/No             N/A
Truly Stateless                       Yes                 No                 Yes
Truly RESTful                         No                  No                 Yes
HTTP-free                             No                  No                 Yes
21.1.1.1. HTTP basic auth over HTTPS
This first solution, based on the standard HTTPS protocol, is used by most web services. It's easy to implement, available by default on all browsers, but has some known drawbacks, like the awful authentication window displayed on the Browser, which will persist (there is no LogOut-like feature here), some server-side additional CPU consumption, and the fact that the user-name and password are transmitted (over HTTPS) to the Server (it would be more secure to let the password stay only on the client side, during keyboard entry, and be stored as a secure hash on the Server).
To be honest, a session managed on the Server is not truly Stateless. One possibility could be to maintain all data within the cookie content. And, by design, the cookie is handled on the Server side (the Client in fact doesn't even try to interpret this cookie data: it just hands it back to the server on each successive request). But this cookie data is application state data, so the client should manage it, not the server, in a pure Stateless world.
The cookie technique itself is HTTP-linked, so it's not truly RESTful, which should be protocol-independent. Since our framework does not provide only HTTP protocol, but offers other ways of transmission, Cookies were left at the baker's home.
Query Authentication consists in signing each RESTful request via some additional parameters on the URI. All REST queries must be authenticated by signing the query parameters, sorted in lower-case alphabetical order, using the private credential as the signing token. Signing should occur before URI-encoding the query string.
For instance, here is a generic URI sample from the link above:
GET /object?apiKey=Qwerty2010
should be transmitted as such:
GET /object?timestamp=1261496500&apiKey=Qwerty2010&signature=abcdef0123456789
The string being signed is "/object?apikey=Qwerty2010&timestamp=1261496500" and the signature is the SHA256 hash of that string using the private component of the API key.
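As a rough Delphi sketch of this scheme - the exact way the private key is combined with the canonical URI is an assumption, and SynCrypto's SHA256() function is used for illustration only:
uses SynCommons, SynCrypto;

function ComputeSignature(const CanonicalURI, PrivateKey: RawUTF8): RawUTF8;
begin // e.g. CanonicalURI = '/object?apikey=Qwerty2010&timestamp=1261496500'
  result := SHA256(PrivateKey+CanonicalURI); // hexadecimal SHA256 of the salted string
end;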
This technique is perhaps the most compatible with a Stateless architecture, and can also be implemented with a light session management.
Server-side data caching is always available. In our framework, we cache the responses at the SQL level, not at the URI level (thanks to our optimized implementation of GetJSONObjectAsSQL, the URI to SQL conversion is very fast). So adding this extra parameter doesn't break the cache mechanism.
21.1.2. Framework authentication
Even if, theoretically speaking, Query Authentication sounds to be the better for implementing a truly RESTful architecture, our framework tries to implement a Client-Server design.
In practice, we may consider two way of using it:
With no authentication nor user right management (e.g. for local access of data, or framework use over a secured network);
With per-user authentication and right management via defined security groups, and a per-query authentication, following several protocols (a set of mORMot flavors, Windows NTLM/Kerberos, or any custom scheme).
According to RESTful principle, handling per-session data is not to be implemented in such architecture. A minimal "session-like" feature was introduced only to handle user authentication with very low overhead on both Client and Server side. The main technique used for our security is therefore Query Authentication, i.e. a per-URI signature.
If the aHandleUserAuthentication parameter is left to its default false value for the TSQLRestServer.Create constructor, no authentication is performed. All tables will be accessible by any client, as stated below. As stated above, for security reasons, the ability to execute INSERT / UPDATE / DELETE SQL statements via a RESTful POST command is never allowed by default with remote connections: only SELECT can be executed via this POST verb.
If authentication is enabled for the Client-Server process (i.e. if the aHandleUserAuthentication parameter is set to true at the TSQLRestServer instance construction - see the constructor sketch after this list), the following security features will be added:
On the Server side, a dedicated service, accessible via the ModelRoot/Auth URI is to be called to register an User, and create an in-memory session;
Clients should open a session to access the Server, after authentication validation (e.g. via a UserName / Password pair, or Windows credentials);
Each CRUD statement is checked against the authenticated User security group, via the AccessRights column and its GET / POST / PUT / DELETE per-table bit sets;
Thanks to Per-User authentication, any SQL statement command may be available via the RESTful POST verb for a user whose AccessRights group field contains the reSQL flag in its AllowRemoteExecute;
Each REST request will expect an additional parameter, named session_signature, appended to every URL. Using the URI instead of cookies allows the signature process to work with all communication protocols, not only HTTP;
Of course, you have the opportunity to tune or even by-pass the security for a given service (method-based or interface-based), on need: e.g. to allow some methods only to your system administrators, or to serve public HTML content.
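A minimal sketch of enabling this mode at server construction - the database file name is a placeholder:
aServer := TSQLRestServerDB.Create(aModel,'data.db3',
  {aHandleUserAuthentication=}true);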
21.1.2.1. Per-User authentication
On the Server side, two tables, defined by the TSQLAuthGroup and TSQLAuthUser classes, will handle respectively per-group access rights (authorization), and user validation (authentication). In the database, they will be persisted as AuthGroup and AuthUser tables.
The Server will search for any class inheriting from TSQLAuthGroup and TSQLAuthUser in its Model. By default, you can use plain TSQLAuthGroup and TSQLAuthUser classes - and if none is defined in the model, and authentication is enabled, those mandatory classes will be added. But you can inherit from TSQLAuthGroup and TSQLAuthUser, and define e.g. your own fields, for any custom purpose at Group or User level. The exact class types are available from SQLAuthUserClass and SQLAuthGroupClass properties of TSQLRestServer.
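For instance - a minimal sketch of such a custom user class; the added field is purely illustrative:
type
  TSQLMyAuthUser = class(TSQLAuthUser)
  protected
    fDepartment: RawUTF8;
  published
    property Department: RawUTF8 read fDepartment write fDepartment;
  end;
Include TSQLMyAuthUser in your TSQLModel, and the server will use it instead of the default TSQLAuthUser table.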
Since the whole records will be loaded and persisted in memory at every authentication, do not store too much data in those tables: for instance, do not put historical data (like previous client activity), or huge BLOBs (like detailed pictures) - a dedicated table or set of tables will be a better idea.
Here is the layout of the default AuthGroup table, as defined by the TSQLAuthGroup class type:
AuthGroup Record Layout
The AccessRights column is a textual CSV serialization of the TSQLAccessRights record content, as expected by the TSQLRestServer.URI method. Using a CSV serialization, instead of a binary serialization, will allow the change of the MAX_SQLTABLES constant value.
The AuthUser table, as defined by the TSQLAuthUser class type, is defined as such:
AuthUser Record Layout
Each user therefore has its own associated AuthGroup record, a name to be entered at login, a name to be displayed on screen or reports, and a SHA256 hash of its registered password (with optional PBKDF2_HMAC_SHA256 derivation). A custom Data BLOB field is specified for your own application use, but not accessed by the framework.
By default, the following security groups are created on a void database:
Group        POST SQL   SELECT SQL   Auth R   Auth W   Tables R   Tables W   Services
Admin        Yes        Yes          Yes      Yes      Yes        Yes        Yes
Supervisor   No         Yes          Yes      No       Yes        Yes        Yes
User         No         No           No       No       Yes        Yes        Yes
Guest        No         No           No       No       Yes        No         No
'Admin' will be the only group able to execute remote non-SELECT SQL statements via POST commands (reSQL flag in TSQLAccessRights.AllowRemoteExecute) and to modify the Auth* tables (i.e. AuthUser and AuthGroup). 'Admin' and 'Supervisor' will allow any SELECT SQL statement to be executed, even if the table can't be retrieved and checked (corresponding to the reSQLSelectWithoutTable flag). 'User' won't have the reSQLSelectWithoutTable flag, nor the right to retrieve the Auth* tables data for other users. 'Guest' won't have access to the interface-based remote JSON-RPC services (no reService flag), nor be able to perform any modification to a table: in short, this is an ORM read-only limited user.
Please see below and the TSQLAccessRights documentation for all available options and use cases.
Then the corresponding 'Admin', 'Supervisor' and 'User' AuthUser accounts are created, with the default 'synopse' password.
You MUST override those default 'synopse' passwords for each AuthUser record to a custom genuine value.
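For instance, a hedged sketch of overriding the 'Admin' password could be as follows - the plain-text value shown is of course a placeholder, and the PasswordPlain setter stores only the hash:
var U: TSQLAuthUser;
begin
  U := TSQLAuthUser.Create(Server, 'LogonName=?', ['Admin']);
  try
    U.PasswordPlain := 'SomeStrongPassPhrase'; // only the hash will be persisted
    Server.Update(U);
  finally
    U.Free;
  end;
end;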
A typical JSON representation of the default security user/group definitions is the following:
Of course, you can change the AuthUser and AuthGroup table content, to match your security requirements and application specifications. You can specify per-table CRUD access, via the AccessRights column, as stated above when describing the TSQLAccessRights record layout.
This will implement both Query Authentication together with a group-defined per-user right management.
21.1.2.2. Session handling
A dedicated RESTful service, available from the ModelRoot/Auth URI, is to be used for user authentication, handling so called sessions.
In mORMot, a very light in-memory set of sessions is implemented:
The unique ModelRoot/Auth URI end-point will create a session after proper authorization;
In-memory session allows very fast process and reactivity, on Server side;
Sessions could be optionally persisted on disk at server shutdown, to avoid breaking existing client connections;
An integer session identifier is used for the whole authorization process, independently from the underlying authentication scheme (i.e. mORMot is not tied to cookies, and its session process is much more generic).
Those sessions are in-memory TAuthSession class instances. Note that this class does not inherit from a TSQLRecord table so won't be remotely accessible, for performance and security reasons.
The server methods should not have to access those TAuthSession instances directly, but rely on the SessionID identifier. But you can still access the current session properties, e.g. the remote user, thanks to methods like TSQLRestServer.SessionGetUser(): TSQLAuthUser.
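As a hedged sketch of such a session-aware method-based service - the TMyRestServer class and its WhoAmI method are hypothetical, introduced only for illustration:
procedure TMyRestServer.WhoAmI(Ctxt: TSQLRestServerURIContext);
var user: TSQLAuthUser;
begin
  user := SessionGetUser(Ctxt.Session); // returns a transient copy of the user record
  if user = nil then
    Ctxt.Error('No opened session', HTTP_FORBIDDEN)
  else
    try
      Ctxt.Results([user.LogonName]); // e.g. {"result":["User"]}
    finally
      user.Free;
    end;
end;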
When the Client is about to close (typically in TSQLRestClientURI.Destroy), a GET ModelRoot/auth?UserName=...&Session=... request is sent to the remote server, in order to explicitly close the corresponding session in the server memory (avoiding most re-play attacks).
Note that each opened session has an internal TimeOut parameter (retrieved from the associated TSQLAuthGroup table content): after some time of inactivity, sessions are closed on the Server Side.
In addition, sessions are used to manage safe cross-client transactions:
When a transaction is initiated by a client, it will store the corresponding client Session ID, and use it to allow client-safe writing;
Any further write to the DB (Add/Update/Delete) will be accessible only from this Session ID, until the transaction is released (via commit or rollback);
If a transaction began and another client session tries to write to the DB, it will wait until the current transaction is released - a timeout may occur if the server is not able to acquire the write status within a given time;
This global write locking is defined by the TSQLRest.AcquireWriteMode / AcquireWriteTimeOut properties, and used on the Server side by TSQLRestServer.URI - you can change this behavior by setting e.g. AcquireWriteMode := amBackgroundThread, which will force any write process to be executed in a dedicated thread: this may be mandatory if your database client expects the transaction process to take place in the same thread (e.g. MS SQL) - see the sketch after this list;
If the server does not handle Session/Authentication, transactions can be unsafe, in a multi-client concurrent architecture.
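Here is a hedged configuration sketch of this write-locking behavior (the values used are arbitrary examples):
  Server.AcquireWriteMode := amBackgroundThread; // serialize all write operations in one dedicated thread
  Server.AcquireWriteTimeOut := 5000;            // wait up to 5 seconds to acquire the write lock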
You can specify an optional file name parameter to the TSQLRestServer.Shutdown() method, which will save the current server state into a local file. Then, if you restart the server shortly afterwards, you may be able to restore all session information by using TSQLRestServer.SessionsLoadFromFile(). This feature enables e.g. a quick and transparent ORM Server executable upgrade in production. But note that even if sessions are persisted and restored, any session-dependent complex data - like the server-side temporary instance lifetime generated by interface-based services - won't be available. This session backup/restore therefore makes sense only when the server is used in ORM mode, not as SOA.
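A hedged sketch of this graceful upgrade scenario (the file name and the optional parameter are illustrative and should be checked against the actual method signatures):
  // before stopping the old executable
  Server.Shutdown('sessions.data');
  // ... deploy the new executable, then in its startup code:
  Server.SessionsLoadFromFile('sessions.data', {andDeleteExistingFileAfterRead=}true);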
21.1.3. Authentication schemes
21.1.3.1. Class-driven authentication
Authentication is implemented in mORMot via the following classes:
TSQLRestServerAuthentication classes hierarchy
In fact, you can use one of the following RESTful authentication schemes:
mORMot secure RESTful authentication (TSQLRestServerAuthenticationDefault);
Windows authentication (TSQLRestServerAuthenticationSSPI);
Weak authentication (TSQLRestServerAuthenticationNone);
HTTP Basic authentication (TSQLRestServerAuthenticationHttpBasic) - warning: the password is not encrypted.
All those classes will identify a TSQLAuthUser record from a user name. The associated TSQLAuthGroup is then used later for authorization.
You can add your own custom authentication scheme by defining your own class, inheriting from TSQLRestServerAuthentication.
By default, no authentication is performed.
If you set the aHandleUserAuthentication parameter to true when calling the constructor TSQLRestServer.Create(), both default secure mORMot authentication and Windows authentication are available. In fact, the constructor executes the following:
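The snippet below is a hedged reconstruction of that registration step - see the actual TSQLRestServer.Create source for the authoritative code:
  if aHandleUserAuthentication then
    // default mORMot secure authentication, plus Windows authentication (if SSPIAUTH is defined)
    AuthenticationRegister([
      TSQLRestServerAuthenticationDefault
      {$ifdef SSPIAUTH}, TSQLRestServerAuthenticationSSPI{$endif}]);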
In order to define one or several authentication schemes, you can call the AuthenticationRegister() and AuthenticationUnregister() methods of TSQLRestServer.
MyClient.SetUser('User','synopse'); // default user for Security tests
Here are the typical steps to be followed in order to create a new user session via mORMot authentication scheme:
Client sends a GET ModelRoot/auth?UserName=... request to the remote server - with the above command, it will be GET ModelRoot/auth?UserName=User;
Server answers with a hexadecimal nonce contents (valid for about 5 minutes), encoded as JSON result object;
Client sends a GET ModelRoot/auth?UserName=...&PassWord=...&ClientNonce=... request to the remote server, in which ClientNonce is a random value used as Client nonce, and PassWord is computed from the log-on and password entered by the User, using both Server and Client nonce as salt;
Server checks that the transmitted password is valid, i.e. that it matches the hashed password stored in its database, together with a time-valid Server nonce - if the value is not correct, authentication fails;
On success, Server will create a new in-memory session and return the session number and a private key to be used during the session (encoded as JSON result object);
On any further access to the Server, a &session_signature= parameter is added to the URL, and will be checked against the valid sessions in order to validate the request.
Query Authentication is handled at the Client side in TSQLRestClientURI.SessionSign method, by computing the session_signature parameter for a given URL, according to the TSQLRestServerAuthentication class used.
In order to enhance security, the session_signature parameter will contain, encoded as 3 hexadecimal 32-bit cardinals:
The Session ID (to retrieve the private key used for the signature);
A Client Time Stamp (in 256 ms resolution) which must be greater than or equal to the previous time stamp received;
The URI signature, using the session private key, the user hashed password, and the supplied Client Time Stamp as source for its crc32 hashing algorithm.
Such a classical 3 points signature will avoid most man-in-the-middle (MITM) or re-play attacks.
Here is a typical signature to access the root URL:
root?session_signature=0000004C000F6BE365D8D454
In this case, 0000004C is the Session ID, 000F6BE3 is the client time stamp (aka nonce), and 65D8D454 is the signature, computed by the following Delphi expression:
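The expression is reproduced here as a hedged reconstruction - the field names are approximate and should be checked against the TSQLRestServerAuthenticationDefault implementation in mORMot.pas:
  signature := crc32(crc32(fPrivateSaltHash, PAnsiChar(@TimeStamp), 8),
    pointer(aUrl), length(aUrl)); // session private key, then client time stamp, then URI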
For better Server-side performance, the URI signature will use fast crc32 hashing method, and not the more secure (but much slower) SHA256. Since our security model is not officially validated as a standard method (there is no standard for per URI authentication of RESTful applications), the better security will be handled by encrypting the whole transmission channel, using standard HTTPS with certificates signed by a trusted CA, validated for both client and server side. The security involved by using crc32 will be enough for most common use. Note that the password hashing and the session opening will use SHA256 or PBKDF2_HMAC_SHA256, to enhance security with no performance penalty.
In our implementation, for better Server-side reaction, the session_signature parameter is appended at the end of the URI, and the URI parameters are not sorted alphabetically, as suggested by the reference article quoted above. This should not be a problem, either from a Delphi Client or from an AJAX / JavaScript client.
In practice, this scheme is secure and very fast, perfect for a Delphi client or an AJAX application. If you expect a higher level of security for the URI signature, you may consider switching to a cryptographic-level MD5/SHA1/SHA256/SHA512 hash, by selecting a given TSQLRestServerAuthenticationSignedURIAlgo on the server side.
21.1.3.3. Authentication using Windows credentials
21.1.3.3.1. Windows Authentication
By default, the hash of the user password is stored safely on the server side. This may be an issue for corporate applications, since a new user name / password pair is to be defined by each client, which may be annoying.
Since revision 1.18 of the framework, mORMot is able to use Windows Authentication to identify any user. That is, the user does not need to enter any name nor password, but her/his Windows credentials, as entered at Windows session startup, will be used.
If the SSPIAUTH conditional is defined (which is the default), any call to TSQLRestClientURI.SetUser() method with a void aUserName parameter will try to use current logged name and password to perform a secure Client-Server authentication. It will in fact call the class function TSQLRestServerAuthenticationSSPI.ClientSetUser() method.
In this case, the aPassword parameter identifies whether the NTLM or Kerberos authentication scheme is to be used: it may contain the SPN domain name to enable Kerberos - see the next section. This is transparent to the framework, and a regular session will be created on success.
The only prerequisite is that the TSQLAuthUser table shall contain a corresponding entry, with its LogonName column equal to the 'DomainName\UserName' value. This data row won't be created automatically, since it is up to the application to allow or disallow access from an authenticated user: you can be a member of the domain, but not eligible to the application.
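A hedged sketch of how such an entry may be provisioned on the server side - the domain, user and group ID values are illustrative only:
var U: TSQLAuthUser;
begin
  U := TSQLAuthUser.Create;
  try
    U.LogonName := 'MYDOMAIN\jdoe';
    U.DisplayName := 'John Doe';
    U.GroupRights := TSQLAuthGroup(3); // hypothetical: the ID of the 'User' group row
    Server.Add(U, true);
  finally
    U.Free;
  end;
end;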
21.1.3.3.2. Using NTLM or Kerberos
Kerberos is the preferred authentication protocol for Windows Server 2003 and subsequent Active Directory domains.
Kerberos authentication offers the following advantages over NTLM authentication:
Mutual authentication. When a client uses the Kerberos protocol for authentication with a particular service on a particular server, Kerberos provides the client with an assurance that the service is not being impersonated by malicious code on the network.
Simplified trust management. Networks with multiple domains no longer require a complex set of explicit, point-to-point trust relationships.
Enhanced security. The old NTLM protocol suffers from several weaknesses, which have been fixed by Kerberos.
Performance. Offers improved performance, mostly for server applications.
Requirements for Kerberos authentication are the following:
Client and Server must join a domain, and the trusted third party must exist; if client and server are in different domains, these two domains must be configured with a two-way trust.
SPNs must have been registered properly. Service Principal Names (SPNs) are unique identifiers for services running on servers. Each service that will use Kerberos authentication needs to have an SPN set for it, so that clients can identify the service on the network. An SPN is registered in Active Directory under either a computer account or a user account. See below for the corresponding instructions.
Typical use case of either Kerberos or NTLM are defined by the aPassword parameter:
Kerberos is used for a remote connection over a network and if aPassword is set to the expected SPN domain;
NTLM is used over a network connection if aPassword is empty;
NTLM is used when making a local connection.
Note that Kerberos is therefore used only when making remote connections over a network, whereas NTLM applies to local connections.
To enable Kerberos authentication in mORMot, you need to register SPN for your service.
The format of an SPN is ServiceClass/Host:Port/ServiceName. Typically, the SPN for a service developed with mORMot looks like mymormotservice/myserver.mydomain.tld or http/myserver.mydomain.tld.
To list SPNs of a computer named MYSERVER, at the command prompt, type:
setspn -l myserver
Typically, you can see the following output:
Registered ServicePrincipalNames for CN=MYSERVER,OU=Computers,DC=domain,DC=tld:
HOST/MYSERVER.domain.tld
HOST/MYSERVER
If your service runs under the SYSTEM or Network Service machine accounts, you can test Kerberos authentication by setting the aPassword parameter to the value 'HOST/MYSERVER.domain.tld' in the client code and running the application.
To register SPN for your service, at the command prompt, type:
If your service runs under the SYSTEM or Network Service machine accounts:
setspn -a mymormotservice/myserver.mydomain.tld myserver
If your service runs under another domain account:
setspn -a mymormotservice/myserver.mydomain.tld myserviceaccount
Membership in Domain Admins group, or equivalent, is the minimum required to complete this procedure.
After registration, you can connect to the server as such:
MyClient.SetUser('','mymormotservice/myserver.mydomain.tld'); // will use Kerberos
For good old NTLM, you can run:
MyClient.SetUser('',''); // will use NTLM
Or directly call the TSQLRestServerAuthenticationSSPI.ClientSetUser() method.
The authentication mode used will appear in the log file, if you define the WITHLOG conditional when building the service application, and if sllUserAuth is part of the TSQLLog.Family.Level set.
Messages will be as follows:
NTLM Authentication success for domain\myuser
Kerberos Authentication success for domain\myuser
The framework authorization will then be processed as usual, for all features like RESTful ORM process and remote services.
21.1.3.4. Weak authentication
The TSQLRestServerAuthenticationNone class can be used if you trust your client (e.g. via an https connection). It will implement a weak but simple authentication scheme.
Here are the typical steps to be followed in order to create a new user session via this authentication scheme:
Client sends a GET ModelRoot/auth?UserName=... request to the remote server;
Server checks that the transmitted user name is valid, i.e. that it is available in the TSQLAuthUser table - if the value is not correct, authentication fails;
On success, the Server will create a new in-memory session and return the associated session number (encoded as a decimal value in the JSON result object);
On any further access to the Server, a &session_signature= parameter is to be added to the URL with the correct session ID (encoded as hexadecimal), and will be checked against the valid sessions in order to validate the request.
For instance, a RESTful GET of the TSQLRecordPeople table with RowID=6 will have the following URI:
root/People/6?session_signature=0000004C
Here is some sample code about how to define this authentication scheme:
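The snippet below is a hedged reconstruction of such a setup:
  // on the server side
  Server.AuthenticationRegister(TSQLRestServerAuthenticationNone);
  // on the client side (no password is needed with this weak scheme)
  TSQLRestServerAuthenticationNone.ClientSetUser(Client, 'User', '');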
The Basic Authentication mechanism provides no confidentiality protection for the transmitted credentials. They are merely encoded with Base64 in transit, but not encrypted or hashed in any way. Basic Authentication is, therefore, typically used over HTTPS.
The TSQLRestServerAuthenticationHttpBasic class can be used to enable HTTP Basic authentication. This class is not to be used with a mORMot client, since TSQLRestServerAuthenticationDefault provides a much better scheme, both safer and faster, but could be used in conjunction with some browser clients, over HTTPS.
21.1.4. Clients authentication
21.1.4.1. Client interactivity
Note that with this design, it's up to the Client to react to an authentication error during any request, and ask again for the user name and password at any time to create a new session. For multiple reasons (server restart, session timeout...), the session can be closed by the Server without previous notice.
Check(Client.SetUser('User','synopse')); // use default user
Then an event handler can be associated with the TSQLRestClientURI.OnAuthentificationFailed property, in order to ask the user to enter his/her login name and password:
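A minimal sketch of such a handler could look as follows - the exact callback signature should be checked against the TOnAuthentificationFailed declaration in mORMot.pas, and the InputQuery dialogs are only illustrative:
function TMainForm.OnAuthentificationFailed(Retry: integer;
  var aUserName, aPassword: string; out aPasswordHashed: boolean): boolean;
begin
  aPasswordHashed := false; // the password below is supplied as plain text
  // give up after a few attempts, otherwise ask the user for new credentials
  result := (Retry <= 3) and
    InputQuery('Authentication', 'User name:', aUserName) and
    InputQuery('Authentication', 'Password:', aPassword);
end;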
Of course, if Windows Authentication is defined (see above), this event handler shall be adapted as expected. For instance, you may add a custom notification to register the corresponding user to the TSQLAuthUser table.
21.1.4.2. Authentication using AJAX
Smart Mobile Studio can generate JavaScript code from its IDE. Our template-based code generation makes this solution perfectly integrated with the mORMot server, especially about authentication: you will find the same TSQLRestServerAuthenticationDefault and TSQLRestServerAuthenticationNone classes in our SynCrossPlatformREST.pas unit, ready to authenticate to the server. In fact, there is also a command-line compiler available (named smsc.exe) which can create a .js file from SmartPascal code: you may use it to integrate the generated client into a regular HTML5 application (using e.g. jQuery or AngularJS).
Some stand-alone working JavaScript code has been published in our forum by a framework user (thanks, "RangerX"), which implements the authentication schema as detailed above. It uses jQuery, and HTML 5 LocalStorage, not cookies, for storing session information on the Client side. See https://synopse.info/forum/viewtopic.php?pid=2995#p2995
Any kind of JSON/HTTPS client could easily connect to such a service, by providing a valid JWT as an 'Authorization: Bearer ####' HTTP header. A dedicated authentication service may be used to return a JWT in exchange for some credentials (typically a user name / password pair) for your application.
In practice, for your internal MicroServices communication, you could therefore use regular mORMot secure RESTful authentication over WebSockets support, which is pretty stable and efficient. Then for a public API, you could use a regular TSQLHttpServer - see Network and Internet access via HTTP - perhaps over an nginx reverse proxy (e.g. for Let's Encrypt HTTPS certification). Since the mORMot authentication is proprietary, using a JWT may sound more natural for a public API service, with a more relaxed JSON encoding and no contract. That is, on the server side, you define: ServiceDefine(...).ResultAsJSONObjectWithoutResult := true; and on the client side, you call the TSQLRestClientURI.ServiceDefineSharedAPI() method to follow a similar, more standard JSON layout (i.e. JSON objects as input/output and not a JSON array, without any contract negotiation).
21.2. Authorization
By authorization, we mean the action of defining an access policy to the RESTful resources, for an authenticated user. Even if this user may be a guest user (with no specific access credential), it should be identified as such, e.g. to serve public content.
The main principle is the principle of least privilege (also known as the principle of minimal privilege or the principle of least authority): in a particular abstraction layer of a computing environment, every module (such as a process, a user or a program depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose.
It is most of the time implemented via Access Control Lists (ACL), sets of capabilities, or user groups. In mORMot, we defined user groups, associated with the TSQLAuthGroup ORM class.
Today, authorization is part of a trust chain:
In corporate networks, the Active Directory service gives a token for an already signed user, or LDAP allows access to resources;
In social networks, protocols like OAuth allow trusting a user across services.
This allows the very convenient feature of single sign-on: the user can authenticate only once (e.g. at Windows logon), then he/she will be authenticated for the whole session, and each authorization will provide the appropriate rights. Our framework e.g. features NTLM / Kerberos authentication, as we just saw.
21.2.1. Per-table access rights
Even if authentication is disabled, a pointer to a TSQLAccessRights record, and its GET / POST / PUT / DELETE fields, is sent as a member of the parameter to the unique access point of the server class:
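As a simplified, hedged sketch of the declarations involved (see mORMot.pas for the exact definitions):
  TSQLAccessRights = record
    AllowRemoteExecute: TSQLAllowRemoteExecute; // reSQL, reSQLSelectWithoutTable, reService...
    GET, POST, PUT, DELETE: TSQLFieldTables;    // one bit per table index in the TSQLModel
  end;

  TSQLRestURIParams = record
    ...
    RestAccessRights: PSQLAccessRights; // access rights to be used for this request
    ...
  end;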
This will allow checking of access right for all CRUD operations, according to the table invoked. For instance, if the table TSQLRecordPeople has 2 as index in TSQLModel.Tables[], any incoming POST command for TSQLRecordPeople will be allowed only if the 2nd bit in RestAccessRights^.POST field is set, as such:
case URI.Method of
mPOST: begin       // POST=ADD=INSERT
  if URI.Table=nil then begin
    (...)
  end else
    // here, Table<>nil and TableIndex in [0..MAX_SQLTABLES-1]
    if not (URI.TableIndex in Call.RestAccessRights^.POST) then // check User
      Call.OutStatus := HTTP_FORBIDDEN else
    (...)
Making access rights a parameter allows this method to be handled as pure stateless, thread-safe and session-free, from the bottom-most level of the framework.
On the other hand, the security policy defined by this global parameter does not allow tuned per-user authorization. In the current implementation, the SUPERVISOR_ACCESS_RIGHTS constant is transmitted for all handled communication protocols (direct access, Windows Messages, named pipes or HTTP). Only direct access via TSQLRestClientDB will use FULL_ACCESS_RIGHTS, i.e. will have the AllowRemoteExecute parameter set to all possible flags.
The light session process, as implemented by Authentication, is used to override the access rights with the one defined in the TSQLAuthGroup.AccessRights field.
Be aware that these per-table access rights depend on the table order as defined in the associated TSQLModel. So if you add some tables to your database model, please take care to add the new tables after the existing ones. If you insert the new tables among the current tables, you will need to update the access rights values.
21.2.2. Additional safety
An AllowRemoteExecute: TSQLAllowRemoteExecute field has been made available in the TSQLAccessRights record to tune remote execution, depending on the authenticated user and the group he/she is part of.
This field adds some flags to tune the security policy, for both SQL or SOA dimensions.
21.2.2.1. SQL remote execution
In our RESTful implementation, a POST command with no table associated in the URI allows any SQL statement to be executed directly. A GET command could also be used, either with the SQL statement transmitted as body (which is convenient, but not supported by all HTTP clients, since it is not standard), or inlined at URI level.
These special commands should be carefully tested before execution, since SQL misuse could lead to major security issues. Such execution over any remote connection, if the SQL statement is not a SELECT, is unsafe: in fact, it may affect the data content.
By default, for security reasons, the AllowRemoteExecute field value in the SUPERVISOR_ACCESS_RIGHTS constant does not include the reSQL flag. It means that no remote SQL execution will be allowed, except for safe read-only SELECT statements.
When SELECT statements are sent, the server will always check the table name specified in their FROM clause. If a single table appears, its security policy will be checked against the GET[] flags of the corresponding table. If the SELECT statement is more complex (e.g. a JOINed statement), then the reSQLSelectWithoutTable flag will be checked to ensure that the user has the right to execute such statements.
Another possibility of SQL remote execution is to add a sql=.... inline parameter to a GET request (with optional paging). The reUrlEncodedSQL flag is used to enable or disable this feature.
Last but not least, a WhereClause=... inline parameter can be added to a DELETE request. The reUrlEncodedDelete option is used to enable or disable this feature.
You can change the default safe policy by including or excluding the reSQL, reSQLSelectWithoutTable, reUrlEncodedSQL or reUrlEncodedDelete flags in the TSQLAuthGroup.AccessRights.AllowRemoteExecute field of an authenticated user session.
If security is a real concern, you should enable mORMot secure RESTful authentication and URI signature on your server, so that only trusted clients may access the server. This is the main security rule of the framework - in practice, those per-table access rights or SQL remote execution flags are more a design rule than a strong security feature. Since remote execution of any SQL statement can be unsafe, we recommend writing a dedicated server-side service (method-based or interface-based) to execute such statements, and disallowing remote SQL execution; then clients can safely use those dedicated safe services, and/or ORM CRUD operations for simple data requests. It will also help your project not to be tied to SQL, so that a later switch to a NoSQL persistence engine will still be possible, without changing the client code.
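For instance, the following hedged sketch enables inlined sql=... statements for the 'Supervisor' group - the group and flag chosen here are only illustrative:
var G: TSQLAuthGroup;
    rights: TSQLAccessRights;
begin
  G := TSQLAuthGroup.Create(Server, 'Ident=?', ['Supervisor']);
  try
    rights := G.SQLAccessRights;
    Include(rights.AllowRemoteExecute, reUrlEncodedSQL);
    G.SQLAccessRights := rights; // re-serializes the AccessRights CSV column
    Server.Update(G);
  finally
    G.Free;
  end;
end;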
In addition to this global parameter, you can set per-service and per-method Security.
For Client-Server services via methods, if authentication is enabled, any method execution will be processed only from a signed URI. You can use TSQLRestServer.ServiceMethodByPassAuthentication() to disable the need for a signature for a given service method - this is e.g. the case for the Auth and Timestamp standard method services.
Do not forget to remove authentication for the services for which you want to enable Scaling via CDN. In fact, such world-wide CDN caching services expect the URI to be generic, and not tied to a particular client session.
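For instance (the 'PublicContent' method name is hypothetical):
  // allow anonymous access to a given method-based service, e.g. for CDN-cached public content
  Server.ServiceMethodByPassAuthentication('PublicContent');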
22. Scripting Engine
Adopt a mORMot
22.1. Scripting abilities
As a Delphi framework, mORMot's premium language support is for the Object Pascal language. But it can be convenient to have some part of your software not fixed within the executable. In fact, once the application is compiled, its execution flow is written in stone: you can't change it, unless you modify the Delphi source and compile it again. Since mORMot is Open Source, you can ship the whole source code to your customers or services with no restriction, and distribute your own code as pre-compiled .dcu files, but your end user would need to have a Delphi IDE installed (and paid for), and to know the Delphi language.
This is where scripting comes onto the scene. For instance, scripting may allow customizing an application's behavior for an end user (e.g. for reporting), or let a domain expert define evolving business rules - following Domain-Driven Design.
If your business model is to publish a core domain expertise (e.g. accounting, peripheral driving, database model, domain objects, communication, AJAX clients...) among several clients, you will sooner or later need to adapt your application to one or several of your customers. There is no "one exe to rule them all". Maintaining several executables could become a "branch-hell". Scripting is welcome here: speed and memory critical functionality (in which mORMot excels) will be hard-coded within the main executable, then everything else could be defined in script.
There are plenty of script languages available. We considered http://code.google.com/p/dwscript which is well maintained and expressive (it is at the core of our beloved Smart Mobile Studio), but is not very commonly used. We still want to include it in the near future. Then http://www.lua.org defines a light and versatile general-purpose language, dedicated to be embedded in any application. Sounds like a viable solution: if you can help with it, your contribution is welcome! We did also take into consideration http://www.python.org and http://www.ruby-lang.org but both are now far from light, and are not meant to be embedded, since they are general-purpose languages, with a huge set of full-featured packages.
Then, there is JavaScript:
This is the World Wide Web assembler. Every programmer in one way or another knows JavaScript;
JavaScript can be a very powerful language - see Crockford's book "JavaScript - The Good Parts";
There are a huge number of libraries written in JavaScript: template engines (jade, mustache...), SOAP and LDAP clients, and many others (including all node.js libraries of course);
It was the base for some strongly-typed syntax extensions, like CoffeeScript, TypeScript, or Dart;
In case of AJAX / Rich Internet Application we can directly share part of logic between client and server (validation, template rendering...) without any middle-ware;
One long-time mORMot user (Pavel, aka mpv) already integrated SpiderMonkey into mORMot's core. His solution is used in production to serve billions of requests per day, with success. We officially integrated his units. Thanks, Pavel!
As a consequence, mORMot introduced direct JavaScript support via SpiderMonkey. It allows to:
Consume JavaScript code from Delphi (e.g. to define and customize any service or rule, or use some existing .js library);
Expose JavaScript objects and functions via a TSMVariant custom variant type: it allows to access any JavaScript object properties or call any of its functions via late-binding, from your Delphi code, just as if it was written in native Object-Pascal;
Follow a classic synchronous blocking pattern, rooted in mORMot's efficient multi-threaded model, easy to write and maintain;
SpiderMonkey, the Mozilla JavaScript engine, can be embedded in your mORMot application. It could be used on client side, within a Delphi application (e.g. for reporting), but the main interest of it may be on the server side.
The word JavaScript may bring to mind features such as event handlers (like onclick), DOM objects, window.open, and XMLHttpRequest. But all of these features are actually not provided by the SpiderMonkey engine itself.
SpiderMonkey provides a few core JavaScript data types (numbers, strings, Arrays, Objects, and so on) and a few methods, such as Array.push. It also makes it easy for each application to expose some of its own objects and functions to JavaScript code. Browsers expose DOM objects. Your application will expose objects that are relevant for the kind of scripts you want to write. It is up to the application developer to decide what objects and methods are exposed to scripts.
22.2.2. Direct access to the SpiderMonkey API
The SynSMAPI.pas unit is a tuned conversion of the SpiderMonkey API, providing full ECMAScript 5 support and JIT. The SpiderMonkey revision 24 engine is included, with a custom C wrapper around the original C++ code. You could take a look at http://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey for a full description of this low-level API, and find our patched version of the library, modified to be published from C instead of C++, in the synsm-mozjs folder of the mORMot source code repository.
But the SynSM.pas unit will encapsulate most of it into higher level Delphi classes and structures (including a custom variant type), so you probably won't need to use SynSMAPI.pas directly in your code:
define a custom variant type, for direct access to any JavaScript object, with late-binding
We will now see how to work with all those classes.
22.2.3. Execution scheme
The SpiderMonkey JavaScript engine compiles and executes scripts containing JavaScript statements and functions. The engine handles memory allocation for the objects needed to execute scripts, and it cleans up (garbage collects) objects it no longer needs.
In order to run any JavaScript code in SpiderMonkey, an application must have three key elements:
A JSRuntime, or runtime, is the space in which the JavaScript variables, objects, scripts, and contexts used by your application are allocated. Every JSContext and every object in an application lives within a JSRuntime. They cannot travel to other runtimes or be shared across runtimes.
A JSContext, or context, is like a little machine that can do many things involving JavaScript code and objects. It can compile and execute scripts, get and set object properties, call JavaScript functions, convert JavaScript data from one type to another, create objects, and so on.
Lastly, the global JSObject is a JavaScript object which contains all the classes, functions, and variables that are available for JavaScript code to use. Whenever web browser code does something like window.open("http://www.mozilla.org/"), it is accessing a global property, in this case window. SpiderMonkey applications have full control over what global properties scripts can see.
Every SpiderMonkey instance starts out every execution context by creating its JSRunTime, JSContext instances, and a global JSObject. It populates this global object with the standard JavaScript classes, like Array and Object. Then application initialization code will add whatever custom classes, functions, and variables (like window) the application wants to provide; it may be, for a mORMot server application, ORM access or SOA services consumption and/or implementation.
Each time the application runs a JavaScript script (using, for example, JS_EvaluateScript), it provides the global object for that script to use. As the script runs, it can create global functions and variables of its own. All of these functions, classes, and variables are stored as properties of the global object.
22.2.4. Creating your execution context
The main point about those three key elements is that, in the current implementation pattern of SpiderMonkey, runtime, context or global objects are not thread-safe.
Therefore, in the mORMot's use of this library, each thread will have its own instance of each.
In the SynSM.pas unit, a TSMEngine class has been defined to give access to all those linked elements:
TSMEngine = class
  ...
  /// access to the associated global object as a TSMVariant custom variant
  // - allows direct property and method executions in Delphi code, via late-binding
  property Global: variant read FGlobal;
  /// access to the associated global object as a TSMObject wrapper
  // - you can use it to register a method
  property GlobalObject: TSMObject read FGlobalObject;
  /// access to the associated global object as low-level PJSObject
  property GlobalObj: PJSObject read FGlobalObject.fobj;
  /// access to the associated execution context
  property cx: PJSContext read fCx;
  /// access to the associated execution runtime
  property rt: PJSRuntime read frt;
  ...
Our implementation will define one Runtime, one Context, and one global object per thread, i.e. one TSMEngine class instance per thread.
A JSRuntime, or runtime, is created for each TSMEngine instance. In practice, you won't need access to this value, but rely either on a JSContext or directly a TSMEngine.
A JSContext, or context, will be the main entry point of all SpiderMonkey API, which expect this context to be supplied as parameter. In mORMot, you can retrieve the running TSMEngine from its context by using the function TSMObject.Engine: TSMEngine - in fact, the engine instance is stored in the private data slot of each JSContext.
Lastly, the TSMEngine's global object contains all the classes, functions, and variables that are available for JavaScript code to use. For a mORMot server application, ORM access or SOA services consumption and/or implementation, as stated above.
You can note that there are several ways to access this global object instance, from high-level to low-level JavaScript object types. The TSMEngine.Global property above is in fact a variant. Our SynSM.pas unit defines in fact a custom variant type, identified as the TSMVariant class, able to access any JavaScript object via late-binding, for both variables and functions:
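As a minimal late-binding sketch (using the standard JavaScript Math object, which SpiderMonkey registers in the global object by default):
  var engine: TSMEngine;
      value: variant;
  ...
    engine := fSMManager.ThreadSafeEngine;
    value := engine.Global.Math.pow(2, 10); // calls JavaScript Math.pow() via TSMVariant late-binding
    assert(value = 1024);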
Most web applications only need one runtime, since they are running in a single thread - and (ab)use callbacks for non-blocking execution. But in mORMot, you will have one TSMEngine instance per thread, using the TSMEngineManager.ThreadSafeEngine method. Then all execution may be blocking, without any noticeable performance issue, since the whole mORMot threading design was defined to maximize execution resources.
22.2.5. Blocking threading model
This threading model is the big difference with other server-side scripting implementation schemes, e.g. the well-known node.js solution.
Multi-threading is not evil, when properly used. And thanks to the mORMot's design, you won't be afraid of writing blocking JavaScript code, without any callbacks. In practice, those callbacks are what makes most JavaScript code difficult to maintain.
On the client side, i.e. in a web browser, the JavaScript engine only uses one thread per web page, then uses callbacks to defer execution of long-running methods (like a remote HTTP request). In fact, this is one well identified performance issue of modern AJAX applications. For instance, it is not possible to perform some intensive calculation in JavaScript without breaking the web application responsiveness: you have to split your computation into small tasks, then let the JavaScript code pause, until the next piece of computation can be triggered... On the server side, node.js allows to define Fibers and Futures - see http://github.com/laverdet/node-fibers - but this is not available on web clients. Some browsers did only start to uncouple the JavaScript execution thread from the HTML rendering thread - and even this is hard to implement... we have reached here the limit of a technology rooted in the 80's...
On the server side, node.js did follow this pattern, which did make sense (it allows to share code with the client side, with some name-space tricks), but it is also a big waste of resources. Why should we stick to an implementation pattern inherited from the 80's computing model, when all CPUs were mono core, and threads were not available?
The main problem when working with one single thread is that your code shall be asynchronous. Sooner or later, you will face a syndrome known as "Callback Hell". In short, you are nesting anonymous functions, and defining callbacks. The main issue, in addition to lower readability and being potentially sunk into function() nesting, is that you just lost the JavaScript exception model. In fact, each callback function has to explicitly check for the error (returned as a parameter in the callback function), and handle it.
Of course, you can use so-called Promises and some nice libraries - mainly async.js. But even those libraries add complexity, and make code more difficult to write. For instance, consider the following non-blocking/asynchronous code:
getTweetsFor("domenic") // promise-returning function
.then(function (tweets) {
var shortUrls = parseTweetsForUrls(tweets);
var mostRecentShortUrl = shortUrls[0];
return expandUrlUsingTwitterApi(mostRecentShortUrl); // promise-returning function
})
.then(httpGet) // promise-returning function
.then(
function (responseBody) {
console.log("Most recent link text:", responseBody);
},
function (error) {
console.error("Error with the twitterverse:", error);
}
);
This kind of code will be perfectly readable for a JavaScript daily user, or someone fluent with functional languages.
But the following blocking/synchronous code may sound much more familiar, safer and less verbose, to most Delphi / Java / C# programmer:
try {
var tweets = getTweetsFor("domenic"); // blocking
var shortUrls = parseTweetsForUrls(tweets);
var mostRecentShortUrl = shortUrls[0];
var responseBody = httpGet(expandUrlUsingTwitterApi(mostRecentShortUrl)); // blocking x 2
console.log("Most recent link text:", responseBody);
} catch (error) {
console.error("Error with the twitterverse: ", error);
}
Thanks to the blocking pattern, it becomes obvious that code readability and maintainability are as high as possible, and error detection is handled nicely via JavaScript exceptions and a global try .. catch.
Last but not least, debugging blocking code is easy and straightforward, since the execution will be linear, following the code flow.
Upcoming ECMAScript 6 should go even further thanks to the yield keyword and some task generators - see http://taskjs.org - so that asynchronous code may become closer to the synchronous pattern. But even with yield, your code won't be as clean as with plain blocking style.
In mORMot, we did choose to follow an alternate path, i.e. write blocking synchronous code. The sample above shows how much easier it is to work with. If you use it to define some huge business logic, or let a domain expert write the code, blocking syntax is much more straightforward.
Of course, mORMot allows you to use callbacks and functional programming pattern in your JavaScript code, if needed. But by default, you are allowed to write KISS blocking code.
22.3. Interaction with existing code
Within mORMot units, you can mix Delphi and JavaScript code by two ways:
Either define your own functions in Delphi code, and execute them from JavaScript;
Or define your own functions in JavaScript code (including any third-party library), and execute them from Delphi.
Like for other parts of our framework, performance and integration have been tuned, to follow our KISS way.
You can take a look at "22 - JavaScript HTTPApi web server\JSHttpApiServer.dpr" sample for reference code.
22.3.1. Proper engine initialization
As was previously stated, the main point to interface the JavaScript engine is to register all methods when the TSMEngine instance is initialized.
For this, you set the corresponding OnNewEngine callback event of the main TSMEngineManager instance. See for instance, in the sample code:
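A hedged reconstruction of this initialization could look as follows - the constructor signature of TTestServer is illustrative:
constructor TTestServer.Create(const aRootFolder: TFileName);
begin
  ...
  fSMManager := TSMEngineManager.Create;
  fSMManager.OnNewEngine := DoOnNewEngine; // will be called once per thread-specific engine
  ...
end;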
In DoOnNewEngine, you will initialize every newly created TSMEngine instance, to register all needed Delphi methods and prepare access to JavaScript via the runtime's global JSObject.
Then each time you want to access the JavaScript engine, you will write for instance:
function TTestServer.Process(Ctxt: THttpServerRequest): cardinal;
var engine: TSMEngine;
...
engine := fSMManager.ThreadSafeEngine;
... // now you can use engine, e.g. engine.Global.someMethod()
Each thread of the HTTP server thread-pool will be initialized on the fly if needed, or the previously initialized instance will be quickly returned otherwise.
Once you have the TSMEngine instance corresponding to the current thread, you can launch actions on its global object, or tune its execution. For instance, it could be a good idea to check for the JavaScript VM's garbage collector:
function TTestServer.Process(Ctxt: THttpServerRequest): cardinal;
...
engine := fSMManager.ThreadSafeEngine;
engine.MaybeGarbageCollect; // perform garbage collection if needed
...
We will now find out how to interact between JavaScript and Delphi code.
22.3.2. Calling Delphi code from JavaScript
In order to call some Delphi method from JavaScript, you will have to register the method. As just stated, it is done by setting a callback within TSMEngineManager.OnNewEngine initialization code. For instance:
procedure TTestServer.DoOnNewEngine(const Engine: TSMEngine);
...
// add native function to the engine
Engine.RegisterMethod(Engine.GlobalObj,'loadFile',LoadFile,1);
end;
Here, the local LoadFile() method is implemented as such in native code:
function TTestServer.LoadFile(const This: variant; const Args: array of variant): variant;
begin
  if length(Args)<>1 then
    raise Exception.Create('Invalid number of args for loadFile(): required 1 (file path)');
  result := AnyTextFileToSynUnicode(Args[0]);
end;
As you can see, this is perfectly easy to follow. Its purpose is to load a file content from JavaScript, by defining a new global function named loadFile(). Remember that the SpiderMonkey engine, by itself, does not know anything about file system, database or even DOM. Only basic objects were registered, like arrays. We have to explicitly register the functions needed by the JavaScript code.
In the above code snippet, we used the TSMEngineMethodEventVariant callback signature, marshalling variant values as parameters. This is the easiest method, with only a slight performance impact.
Such methods have the following features:
Arguments will be transmitted from JavaScript values as simple Delphi types (for numbers or text), or as our custom TSMVariant type for JavaScript objects, which allows late-binding;
The This: variant first parameter maps the "callee" JavaScript object as a TSMVariant custom instance, so that you will be able to access the other object's methods or properties directly via late-binding;
You can benefit from the JavaScript feature of a variable number of arguments when calling a function, since the input arguments are a dynamic array of variant;
All those registered methods are registered in a list maintained in the TSMEngine instance, so it could be pretty convenient to work with, in some cases;
You can still access the low-level JSObject values of any argument, if needed, since they can be trans-typed to a TSMVariantData instance (see below) - so you do not lose any information;
The Delphi native method will be protected by the mORMot wrapper, so that any exception raised within the process will be caught and transmitted as a JavaScript exception to the runtime;
There is also a hidden set of the FPU exception mask during execution of native code (more on it later on) - you should not bother about it here.
Now consider how you should have written the same loadFile() function via low-level API calls.
First, we register the callback:
procedure TTestServer.DoOnNewEngine(const Engine: TSMEngine);
...
// add native function to the engine
Engine.GlobalObject.DefineNativeMethod('loadFile', nsm_loadFile, 1);
end;
As you can see, this nsm_loadFile() function is much more difficult to follow:
Your code shall begin with a cryptic TSynFPUException.ForDelphiCode instruction, to protect the FPU exception flag during execution of native code (Delphi RTL expects its own set of FPU exception mask during execution, which does not match the FPU exception mask expected by SpiderMonkey);
You have to explicitly catch any Delphi exception which may raise, with a try...finally block, and marshal them back as JavaScript errors;
You need to do a lot of manual low-level conversions - via JS_ARGV() then e.g. JSVAL_TO_STRING() macros - to retrieve the actual values of the arguments;
And the returning function is to be marshaled by hand - see the JS_SET_RVAL() line.
Since the variant-based callback has only a slight performance impact (nothing measurable, when compared to the SpiderMonkey engine performance itself), and still have access to all the transmitted information, we strongly encourage you to use this safer and cleaner pattern, and do not define any native function via low-level API.
Note that there is an alternate JSON-based callback, which is not to be used in your end-user code, but will be used when marshalling to JSON is needed, e.g. when working with mORMot's ORM or SOA features.
22.3.3. TSMVariant custom type
As stated above, the SynSM.pas unit defines a TSMVariant custom variant type. It will be used by the unit to marshal any JSObject instance as variant.
Via the magic of late-binding, it allows access to any JavaScript object property, or the execution of any of its functions, with only a slight performance penalty, but with much better code readability than with low-level access of the SpiderMonkey API.
The TSMVariantData memory structure can be used to map such a TSMVariant variant instance. In fact, the custom variant type will store not only the JSObject value, but also its execution context - i.e. JSContext - so is pretty convenient to work with.
For instance, you may be able to write code as such:
function TMyClass.MyFunction(const This: variant; const Args: array of variant): variant;
var global: variant;
begin
  TSMVariantData(This).GetGlobal(global);
  global.anotherFunction(Args[0],Args[1],'test');
  // same as:
  global := TSMVariantData(This).SMObject.Engine.Global;
  global.anotherFunction(Args[0],Args[1],'test');
  // but you may also write directly:
  with TSMVariantData(This).SMObject.Engine do
    Global.anotherFunction(Args[0],Args[1],'test');
  result := AnyTextFileToSynUnicode(Args[0]);
end;
Here, the This custom variant instance is trans-typed via TSMVariantData(This) to access its internal properties.
22.3.4. Calling JavaScript code from Delphi
In order to execute some JavaScript code from Delphi, you should first define the JavaScript functions to be executed. This shall take place within TSMEngineManager.OnNewEngine initialization code:
procedure TTestServer.DoOnNewEngine(const Engine: TSMEngine);
var showDownRunner: SynUnicode;
begin
  // add external JavaScript library to engine (port of the Markdown library)
Engine.Evaluate(fShowDownLib, 'showdown.js');
// add the bootstrap function calling loadfile() then showdown's makeHtml()
showDownRunner := AnyTextFileToSynUnicode(ExeVersion.ProgramFilePath+'showDownRunner.js');
Engine.Evaluate(showDownRunner, 'showDownRunner.js');
...
This code first evaluates (i.e. "executes") a general-purpose JavaScript library contained in the showdown.js file, available in the sample executable folder. This is an open source library able to convert any Markdown markup into HTML. Plain standard JavaScript code.
Then we evaluate (i.e. "execute") a small piece of JavaScript code, to link the makeHtml() function of the just defined library with our loadFile() native function:
function showDownRunner(pathToFile){
  var src = loadFile(pathToFile);            // call Delphi native code
  var converter = new Showdown.converter();  // get the Showdown converter
  return converter.makeHtml(src);            // convert .md content into HTML via showdown.js
}
Now we have a new global function showDownRunner(pathToFile) at hand, ready to be executed by our Delphi code:
function TTestServer.Process(Ctxt: THttpServerRequest): cardinal;
var content: variant;
FileName, FileExt: TFileName;
engine: TSMEngine;
...
if FileExt='.md' then begin
...
engine := fSMManager.ThreadSafeEngine;
...
content := engine.Global.showDownRunner(FileName);
...
As you can see, we access the function via late-binding. The above code is perfectly readable, and we call here a JavaScript function and a whole library as naturally as if it was native code.
Without late-binding, we may have written, accessing not the Global TSMVariant instance, but the lower level GlobalObject: TSMObject property:
It is up to you to choose which kind of code you prefer, but late-binding is worth considering.
23. Asymmetric Encryption
Adopt a mORMot
As we have seen when dealing with Security, the framework offers built-in encryption of the content transmitted between its REST client and server sides, especially via Custom Encodings, or HTTPS. The latter, when using TLS 1.2 and proven patterns, implements state-of-the-art security. But default mORMot encryption, even if using proven algorithms like AES256-CFB and SHA256, uses symmetric keys, that is, the same secret key is shared on both client and server sides.
Asymmetric encryption, also known as public-key cryptography, uses pairs of keys:
Public keys that may be disseminated widely;
Paired with private keys which are known only to the owner.
The framework features a full asymmetric encryption system, based on Elliptic curve cryptography (ECC), which may be used at application level (i.e. to protect your application data), or at transmission level (to enhance communication safety).
23.1. Public-key Cryptography
Once you have generated a public/private pair of keys, you can perform two functions:
Authenticate a message originated with a holder of the private key; a certification system should be used to maintain a trust chain of authority;
Encrypt a message with a public key to ensure that only the holder of the paired private key can decrypt it.
23.1.1. Keys Generation and Distribution
The first process is to generate a pair of public/private keys. A random number generator, probably based on an external entropy source, will gather unpredictable numbers, which will be consumed by a public-key algorithm to generate the actual set of keys. This step usually requires some computing power, due to the complexity of the algorithms involved, and the encryption needed for storing the private key in secret.
Let's explain how it works for the classic Alice/Bob scheme:
Asymmetric Key Generation
Now we have two pairs of keys:
alice.public and alice.private for Alice;
bob.public and bob.private for Bob.
By design, public keys (alice.public and bob.public) can be published, via mail, in application settings, as unprotected file, or even on a public server. On the contrary, private keys (alice.private or bob.private) should remain as secret as possible, and are usually encrypted, then stored in password-protected files, in some safe place of the operating system, or even dedicated hardware.
In practice, Alice will send her alice.public key to Bob, so that:
Bob can verify the digital signature of a message sent by Alice, who signed it with her alice.private key;
Bob can encrypt some information with the known alice.public key, then send it to Alice - and that only Alice could decrypt it using her alice.private key.
Of course, since Bob has his own set of keys, he also publishes his bob.public key, so that:
Alice can verify the digital signature of a message sent by Bob, who signed it with his bob.private key;
Alice can encrypt some information with the known bob.public key, then send it to Bob - and that only Bob could decrypt it using his bob.private key.
Key distribution is an important part of any asymmetric encryption scheme. The whole security chain is as secure as its weakest link, so the secrecy of the private keys requires as much attention as possible. Every software solution using security will probably require external audits, or at least peer review, to validate each implementation.
23.1.2. Message Authentication
Any kind of message (most probably a file or a memory buffer) can be authenticated using digital signatures, using the private key of the sender. Then, on the other side, the receiver can verify the message signature, using the public key of the sender.
Asymmetric Digital Signature
As you can see, if Bob believes that the alice.public file comes from Alice, he can assume that the message.txt content has really been sent by Alice. Most of the time, in such simple scenarios, Alice probably gave her alice.public file directly to Bob, for instance via an email. But for more complex scenarios, like the Client/Server solutions which can be built using the mORMot framework, the multiplicity of nodes, and therefore keys, induces a potential risk.
23.1.3. Certificates and Public Key Infrastructure
A central problem with the use of public-key cryptography is confidence that a particular public key is authentic, i.e. that it belongs to the person or entity claimed, and has not been tampered with or replaced by a malicious third party. Digital signature is more than just creating a hash of some content, or applying some kind of "seal" on it: validation should be done against some reference public keys, which are hosted in a public-key infrastructure (PKI). One or more third parties, aka certification authorities (CA), certify ownership of key pairs, by supplying some online service and/or local safe storage of reference public keys, while keeping their own private keys secret. Any certification authority can sign a message with its private key, or even delegate its own authority to another certification instance, by signing the intermediate authority with its private key. If one certificate is compromised - i.e. if its private key has been released - the whole chain of trust is broken, and all dependent certificates should be immediately revoked.
In practice, when a public certificate key is generated in such a trusted PKI system, it will contain:
The genuine public key material, depending on the underlying algorithm used;
Some ownership information (i.e. who emitted it);
The scope of the certification (may apply to a user, a company, a web site, an application...);
A certified link to one or several other certificates, signed with their private key to prove their authenticity using the known public key of the CA chain;
Optional validity and revocation dates - since it is a good practice to renew certificates on a regular basis.
The private key store may also contain the very same set of information, added to its private key material. It will enforce consistency between public and private keys - for instance, you won't be fooled into using a private key after its associated public certificate has expired.
Certification authorities create a chain of trust, used as a reference to authenticate public keys. Every Operating System or Internet browser contains some root certificates, and the whole Internet security model (HTTPS/TLS) is governed by such a PKI. Of course, for your own set of applications or products, you can create your own key chain, keeping the same principles - mainly private key secrecy and trust chain management.
23.1.4. Message encryption
A naive approach to hiding a message content is to use a secret key or pattern, then apply it to the message. It has been done for ages, and will be as safe as the symmetric key itself. As a side effect, you have to trust the receiver not to spread the key to the public - and in fact, you shouldn't: don't trust anyone, not even yourself!
Public-key cryptography solves this problem by using a public key to encrypt a message, which will therefore only be decryptable by someone knowing the corresponding secret key.
Asymmetric Encryption Scheme
Of course, you can not only encrypt the message with the other end's public key, but also sign it with your own private key. Here is how a sign-then-encrypt pattern can be implemented:
Asymmetric Sign-Then-Encrypt Scheme
As always, the alice.public and bob.public keys are validated against the trust chain of certificates of a public-key infrastructure (PKI).
With all these elements in place, we can now apply them to our mORMot applications.
23.2. Elliptic Curve Cryptography
The framework features an implementation of Elliptic Curve Cryptography (ECC), based on the mathematical structure of the "elliptic curve discrete logarithm problem". The mathematical community has not made any major progress in improving algorithms to solve this problem since it was independently introduced by Koblitz and Miller in 1985. In short, the public key is an equation for an elliptic curve and a point that lies on that curve. The private key is a number. Thanks to the symmetry of the elliptic curve, there is some kind of symmetry also between ECC public and private key values, and ECDSA and ECDH algorithms capitalize on this characteristic to compute a digital signature or a shared secret.
In comparison to the RSA algorithm, ECC has some advantages:
Smaller key size, for the same level of safety (a 256-bit elliptic curve key is comparable to a 3072-bit RSA key);
Well endorsed by most certification authorities (NIST/NSA);
Faster performance, especially when the key size increases;
Offers perfect forward secrecy, since a fresh key is created for every encryption;
Potentially fewer patent infringements, in all its practical applications;
Last but not least, it is one of the strongest algorithms for the future of the web.
There will no doubt be criticism of our decision to re-implement a whole public-key cryptography stack from scratch, with its own small choice of algorithms, instead of using an existing library (like OpenSSL), and established standards (like X509). To be fair, such libraries are complex and confusing, whereas we selected a set of future-proof algorithms (AES256 excluding ECB, HMAC-SHA256, PBKDF2_HMAC_SHA256, ECDSA, ECIES...) to follow mORMot's KISS and DRY principles, keep code maintainable and readable, and reduce risk assessment scope. We followed all identified best practices, and tried to avoid, from the beginning, buffer overflows, weak protocols, low entropy, low default values, serial collision, forensic vulnerabilities, hidden memory copies, evil optimizations. The last thing we want to do is to start mandating DLLs, which are perhaps deprecated/unsafe if part of the OS. Last but not least, it was fun, we learned a lot, and we hope you will enjoy using it, and contribute to it!
23.2.1. Introducing SynEcc
The mORMot SynEcc.pas unit implements full ECC computation, using the secp256r1 curve, i.e. NIST P-256, or OpenSSL's prime256v1. The low-level computation is done in optimized C code - from the https://github.com/esxgx/easy-ecc Open Source project - and is statically linked in your Windows or Linux executable: i.e. no external .dll/.so library is needed. On targets (e.g. BSD/MacOSX or ARM) where we didn't provide the static .o files, there is an optimized pascal version available. Then we defined a feature-rich set of object pascal classes on top of this solid ECC ground, to include certificates, safe storage of private keys, and JSON publication of public keys, as an integrated toolset.
All needed low-level asymmetric cryptography is available:
Innovative .cheat files generation, for safe storage of private key passwords, encrypted with a master cheat.public key and its master password.
You are free to use those classes, in your programs, whenever some advanced cryptography is needed - and it will eventually be the case, trust me! A command-line ECC tool has also been developed, for convenient operation on files.
23.2.2. ECC command line tool
You will find in the SQLite3\Samples\33 - ECC folder the source code of the ECC.dpr console project. Just compile it into an executable, accessible from your command line prompt. Or download an already compiled version from https://synopse.info/files/ecc.7z
It works with no problem under Windows or Linux, with no external dependency (e.g. no OpenSSL needed), so it could be used in an automated server infrastructure. No need to deploy a complex PKI system: just manage your certificates, encryption and signature details, via a single command line tool.
If you run it without argument, you will get simple help information (here is the list at the time of this writing, your own version may differ):
As you can see, the action is defined by a keyword in first position (new sign verify source...). Then some optional parameters, in the form of -key value pairs, can be supplied. If no parameter is specified, the ECC console application will prompt for input, with user-friendly questions, and adequate default values.
You can define the -noprompt switch to force no console interaction at all, therefore allowing automated use from another process, or batch file. The ECCProcess.pas unit publishes all high-level commands of the ECC tool, so could be reused in your own setup or maintenance projects.
We will now use this ECC tool to show the most common features of the SynEcc unit, also showing the code corresponding to each action.
23.2.3. Keys and Certificates Generation
The first step is to create a new key pair, which will contain its own certification information:
>ecc new
Enter the first chars of the .private file name of the signing authority.
Will create a self-signed certificate if left void.
Auth:
Enter Issuer identifier text.
Will be truncated to 15-20 ascii-7 chars.
Issuer [arbou] :
Enter the YYYY-MM-DD start date of its validity.
0 will create a never-expiring certificate
Start [2016-09-23] :
Enter the number of days of its validity.
Days [365] :
Enter a private PassPhrase for the new key (at least 8 chars long).
Save this in a safe place: if you forget it, the key will be useless!
NewPass [#weLHn5E.Qfe] :
Enter the PassPhrase iteration round for the new key (at least 1000).
The higher, the safer, but will demand more computation time.
NewRounds [60000] :
Corresponding TSynPersistentWithPassword.ComputePassword:
encryption ErHdwwro/8jFsCZC
authMutual 5qMgx6Miv+O71+VYL95zk6U2wP79lKL3s1BFnd+a
authServer 5qMhx6Miv+O71+VYL95zk6U2wP79lKL3s1BFnd+a
authClient 5qMix6Miv+O71+VYL95zk6U2wP79lKL3s1BFnd+a
8BC90201EF55EE34F62DBA8FE8CF14DC.public/.private file created.
Here we keep the default values, including the safe generated password (#weLHn5E.Qfe). You should write down this password in a safe place, because it will be required for any use of the private key, e.g. when signing or decrypting a message. If you forget this password, there will be no way of accessing this private key any more - you have been warned! We will see below how enabling the ECC cheat mode may help store the generated .private key passwords in a local .cheat file encrypted with a cheat.public key, so that a password can later be safely recovered using the master cheat.private key and its associated password.
The last line contains the identifier (or serial) of the generated key. This hexadecimal value (8BC90201EF55EE34F62DBA8FE8CF14DC) will be used externally to identify the key, and internally (within other certificates) to map this particular key. Note that you do not need to type all the characters of the serial in the ECC tool: the first characters are enough (e.g. 8BC9), as long as they identify one unique file in the current folder.
You can check the generated files in the current folder:
The .private file is some raw binary content, encrypted using the #weLHn5E.Qfe password. The .public file, on the contrary, is stored as a plain JSON object:
You can see all information stored in a TECCCertificate instance. The "Base64" field is in fact a raw serialization of the whole content, so its string value contains all information of a public certificate, e.g. in application settings.
We did not specify any authority at the first Auth: prompt. As a result, this key pair will be a self-signed certificate - see the "IsSelfSigned": true field in the above JSON, and that "Serial" and "AuthoritySerial" identifiers do match. We will use it as root certificate to create a certificate chain.
All further certificates will eventually be signed by this root authority. For instance:
>ecc new
Enter the first chars of the .private file name of the signing authority.
Will create a self-signed certificate if left void.
Auth: 8
Will use: 8BC90201EF55EE34F62DBA8FE8CF14DC.private
Enter the PassPhrase of this .private file.
AuthPass: #weLHn5E.Qfe
Enter the PassPhrase iteration rounds of this .private file.
AuthRounds [60000] :
Enter Issuer identifier text.
Will be truncated to 15-20 ascii-7 chars.
Issuer [arbou] : toto
Enter the YYYY-MM-DD start date of its validity.
0 will create a never-expiring certificate.
Start [2016-09-23] : 0
Enter a private PassPhrase for the new key (at least 8 chars long).
Save this in a safe place: if you forget it, the key will be useless!
NewPass [b3dEB+DW8BJd] :
Corresponding TSynPersistentWithPassword.ComputePassword:
cIK5hkjDu5/98mwm
Enter the PassPhrase iteration round for the new key (at least 1000).
The higher, the safer, but will demand more computation time.
NewRounds [60000] :
03B8865C6B982A39E9EFB1DC1A95D227.public/.private file created.
As you can see, we entered just 8 at the first Auth: prompt, and the tool identified the single 8*.private file in the current folder. Then we entered its associated #weLHn5E.Qfe password - any wrong password would have aborted the generation. This authority will never expire by itself (we entered 0 at the Start: prompt) - but since its root certificate has an expiration date, it will expire when the root expires.
You can recognize the expected values of "Serial", "AuthoritySerial" and "IsSelfSigned" fields.
We could create a certificate chain of all available keys in the current folder, by running:
>ecc chainall
chain.ca file created.
The chain.ca file is a JSON object, containing all public information of the whole certificate chain, with the "PublicBase64" JSON array ready to be copied and pasted into your application settings or source, then used via the TECCCertificateChain class:
In the above sample, we cut down the "PublicBase64" values, to save some paper and trees. They map the content already shown in the .public JSON files. In fact, the same information is stored twice: once in "PublicBase64", and again in the individual properties ("Version", "Serial", "Issuer"...) of the "Items" entries.
An easy way of managing keys is to keep a safe means of storage (e.g. a pair of USB pen-drives, with at least one kept in a physical vault), then put all your certificate chains in dedicated folders. All public keys - i.e. *.public and chain.ca files - are meant to be public, so they can be spread everywhere. Just keep an eye on your .private files, and their associated passwords. A hardware-secured drive may be overkill, since the .private files are already encrypted and password-protected with state-of-the-art software protection, i.e. AFSplit anti-forensic diffusion and AES256-CFB encryption on a PBKDF2_HMAC_SHA256 derived password, with a huge number of rounds (60,000).
Remember that often, the weakest link of the security chain is between the chair and the keyboard, not within the computer. Do not reuse passwords between keys, and remember you have a "rekey" command available in the ECC tool, so that you can change a private key password without changing its content, nor re-publishing its associated .public key:
>ecc rekey
Enter the first chars of the .private certificate file name.
Auth: 8
Will use: 8BC90201EF55EE34F62DBA8FE8CF14DC.private
Enter the PassPhrase of this .private file.
AuthPass: #weLHn5E.Qfe
Enter the PassPhrase iteration rounds of this .private file.
AuthRounds [60000] :
Enter a NEW private PassPhrase for the key (at least 8 chars long).
Save this in a safe place: if you forget it, the key will be useless!
NewPass [mPy3kjWHE@LK] :
Corresponding TSynPersistentWithPassword.ComputePassword:
f+Gk8GGCqICA8GoJ
Enter the NEW PassPhrase iteration round for the key (at least 1000).
The higher, the safer, but will demand more computation time.
NewRounds [60000] :
8BC90201EF55EE34F62DBA8FE8CF14DC.private file created.
From now on, the root certificate will expect mPy3kjWHE@LK as passphrase for accessing its .private content. For instance (using only command line switches, including the -noprompt option), you can now write:
Here, the "Base64": field only contains the public key information, not the private key content, which is kept secret and never serialized as JSON.
23.2.4. TECCCertificate and TECCCertificateSecret
As a reference, here is how the creation of a new certificate, and the generation of its .private/.public files, is implemented in the ECC tool, using the TECCCertificateSecret class:
function ECCCommandNew(const AuthPrivKey: TFileName;
const AuthPassword: RawUTF8; AuthPasswordRounds: integer;
const Issuer: RawUTF8; StartDate: TDateTime; ExpirationDays: integer;
const SavePassword: RawUTF8; SavePassordRounds, SplitFiles: integer): TFileName;
var auth,new: TECCCertificateSecret;
begin
  if AuthPrivKey='' then
    auth := nil else
    auth := TECCCertificateSecret.CreateFromSecureFile(AuthPrivKey,AuthPassword,AuthPasswordRounds);
  try
    // generate pair
    new := TECCCertificateSecret.CreateNew(auth,Issuer,ExpirationDays,StartDate);
    try
      // save private key as .private password-protected binary file
      new.SaveToSecureFiles(SavePassword,'.',SplitFiles,64,SavePassordRounds);
      // save public key as .public JSON file
      result := ChangeFileExt(new.SaveToSecureFileName,ECCCERTIFICATEPUBLIC_FILEEXT);
      ObjectToJSONFile(new,result);
    finally
      new.Free;
    end;
  finally
    auth.Free;
  end;
end;
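Since ECCProcess.pas publishes this function, a setup or maintenance program could call it directly, instead of spawning the command line tool. Here is a minimal sketch - assuming ECCCommandNew is indeed exported by the ECCProcess.pas unit mentioned above, and with purely illustrative issuer, password and rounds values:
uses
  SysUtils, SynCommons, SynEcc, ECCProcess;

procedure GenerateSelfSignedPair;
var generated: TFileName;
begin
  // a void AuthPrivKey means no signing authority, i.e. a self-signed certificate
  generated := ECCCommandNew(
    '', '', 60000,            // AuthPrivKey, AuthPassword, AuthPasswordRounds
    'mycompany',              // Issuer identifier text (illustrative)
    NowUTC, 365,              // StartDate and ExpirationDays
    'NewKeyP@ssw0rd', 60000,  // SavePassword and SavePassordRounds for the .private file
    1);                       // SplitFiles
  writeln('created ', generated);
end;
In a real project, the SavePassword value should of course come from TAESPRNG or another strong source, and never be hardcoded.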
See the SynEcc.pas unit API reference, especially the TECCCertificateChain and TECCCertificateChainFile classes, which allow storing a certificate chain as JSON files, or as a JSON array of base-64 encoded strings in your settings, using these constructors:
You can use the "source" command of the ECC tool to generate some pascal constant source code, containing an encrypted private key, ready to be embedded to your executable. For instance:
>ecc sign -file test1.txt
Enter the first chars of the .private file name of the signing authority.
Auth: 8B
Will use: 8BC90201EF55EE34F62DBA8FE8CF14DC.private
Enter the PassPhrase of this .private file.
Pass: mPy3kjWHE@LK
Enter the PassPhrase iteration rounds of this .private file.
Rounds [60000] :
test1.txt.sign file created.
In addition to some general information (name, date, size), you have unsigned hashes ("md5" and "sha256"), and an ECC digital signature, stored as a base-64 encoded string in the "sign": field. This signature has been computed using the 8BC90201EF55EE34F62DBA8FE8CF14DC.private key, and the SHA256 hash of the test1.txt file content. Note that you can add whatever JSON field you need to any .sign file, especially in the "meta": nested object, as long as you don't modify the size/md5/sha256/sign values.
To verify the file, ensure that both test1.txt and test1.txt.sign files are in the current directory, then run:
Since the 8BC90201EF55EE34F62DBA8FE8CF14DC.private key has been signed using itself as authority, it is reported as "valid self signed". A signature verified against a certificate itself issued from another authority would have returned "valid signed".
Now if you modify test1.txt, e.g. changing one character, the verification will fail:
From the source code point of view, you can easily add asymmetric digital signatures to your project using the TECCCertificateSecret.SignFile method, or work with memory buffers instead of files thanks to the TECCCertificateSecret.SignToBase64 overloaded methods.
As reference, here is how the signing is implemented in the ECC tool:
function ECCCommandSignFile(const FileToSign, AuthPrivKey: TFileName;
const AuthPassword: RawUTF8; AuthPasswordRounds: integer): TFileName;
var auth: TECCCertificateSecret;
begin
auth := TECCCertificateSecret.CreateFromSecureFile(AuthPrivKey,AuthPassword,AuthPasswordRounds);
try
result := auth.SignFile(FileToSign,[]);
finally
auth.Free;
end;
end;
function ECCCommandVerifyFile(const FileToVerify, AuthPubKey: TFileName;
const AuthBase64: RawUTF8): TECCValidity;
var content: RawByteString;
auth: TECCCertificate;
cert: TECCSignatureCertified;
begin
  content := StringFromFile(FileToVerify);
  if content='' then
    raise EECCException.CreateUTF8('File not found: %',[FileToVerify]);
  cert := TECCSignatureCertified.CreateFromFile(FileToVerify);
  try
    if not cert.Check then begin
      result := ecvInvalidSignature;
      exit;
    end;
    auth := TECCCertificate.Create;
    try
      if auth.FromAuth(AuthPubKey,AuthBase64,cert.AuthoritySerial) then
        result := cert.Verify(auth,pointer(content),length(content)) else
        result := ecvUnknownAuthority;
    finally
      auth.Free;
    end;
  finally
    cert.Free;
  end;
end;
Here, the signing authority is supplied as a single .public local file, loaded in a TECCCertificate instance, but your projects may use TECCCertificateChain for a full PKI authority chain.
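Both functions can also be chained from your own code, e.g. in a deployment script compiled as a small console program - a minimal sketch, with illustrative file names and password, and an incomplete result check (see TECCValidity in SynEcc.pas for the full list of possible values):
procedure SignThenVerify;
var signature: TFileName;
    validity: TECCValidity;
begin
  // sign test1.txt with the private key of the authority (illustrative values)
  signature := ECCCommandSignFile('test1.txt',
    '8BC90201EF55EE34F62DBA8FE8CF14DC.private', 'mPy3kjWHE@LK', 60000);
  // verify it against the matching .public file in the current folder
  validity := ECCCommandVerifyFile('test1.txt',
    '8BC90201EF55EE34F62DBA8FE8CF14DC.public', '');
  if validity in [ecvInvalidSignature, ecvUnknownAuthority] then
    writeln('verification failed') else
    writeln(signature, ' verified');
end;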
23.2.7. File Encryption
In order to encrypt both test files, as proposed in the Asymmetric Encryption Scheme, we will run the following commands:
You may notice that the .synecc files are smaller than the original .txt files... in fact, SynEcc recognized that the plain content was easily compressible, so it applied SynLZ compression on it, before the encryption step.
If we ask for information about the test1.txt.synecc file:
We can see the information stored in the file header, including the recipient name and .public key identifier, and also the "PBKDF2_HMAC_SHA256_AES256_CFB_SYNLZ" algorithm, which indeed includes _SYNLZ compression. Other algorithms are available (with diverse AES chaining modes), and some new methods may be added in the future.
The ecc crypt command also included the digital signature available in the test1.txt.sign file in the current folder - so it was in fact following the Asymmetric Sign-Then-Encrypt Scheme - whereas test2.txt.synecc does not have any embedded signature, since there was no test2.txt.sign file available at encryption time:
As you can see, encryption is defined by its "Algorithm": field, and uses two additional properties:
"RandomPublicKey" which contains a genuine key generated by ecc crypt, allowing perfect forward secrecy, meaning that a shared secret key is computed for every encryption: if someone achieves to break the AES256-CFB secret key used to encrypt a particular .synecc file (e.g. spending lots of money in brute force search), this secret key won't be reusable for any other file: each "RandomPublicKey" value above is indeed unique for each .synecc file;
"HMAC": which uses a safe way of message authentication - known as keyed-hash message authentication code (HMAC) - stronger than the hashing algorithm it is based on, i.e. SHA256 in our case.
In practice, SynEcc implements state-of-the-art Elliptic Curve Integrated Encryption Scheme (ECIES) using PBKDF2_HMAC_SHA256 as key derivation function, AES256-CFB as symmetric encryption scheme, and HMAC-SHA256 algorithm for message authentication. See https://en.wikipedia.org/wiki/Integrated_Encryption_Scheme
ECIES provides semantic security against an adversary who is allowed to use chosen-plaintext and chosen-ciphertext attacks. In addition to the expected genuine secret and message authentication in the "RandomPublicKey" and "HMAC" properties, the SynEcc implementation allows you to customize the default "salt" value, to add password protection for each .synecc encrypted file.
Decryption is pretty straightforward:
>ecc decrypt -file test1.txt.synecc
Enter the name of the decrypted file
Out [test1.txt.2] :
Enter the PassPhrase of the associated .private file.
AuthPass: b3dEB+DW8BJd
Enter the PassPhrase iteration rounds of this .private file.
AuthRounds [60000] :
Enter the optional PassPhrase to be used for decryption.
SaltPass [salt] : monsecret
Enter the PassPhrase iteration rounds.
SaltRounds [60000] :
test1.txt.2 file verified as valid self signed.
test1.txt.synecc file decrypted with signature.
test1.txt.2 file created.
To decrypt the second file in a single step, with no console interaction:
The *.2 decrypted files have the expected size (and content), after decompression. Even the file timestamp has been set to match the original.
23.2.8. Private Keys Passwords Cheat Mode
In order to follow best practice, our .private key files are always protected by a password. A random value with enough length and entropy is always proposed by the ECC tool when a key pair is generated, and could be used directly. It is always better to trust a computer to create true randomness (SynCrypto.pas's secure TAESPRNG was designed to be the best possible seed, using hardware entropy if available) than to rely on our human brain, which can be defeated by dictionary-based password attacks. Brute force cracking would be almost impossible, since the PBKDF2_HMAC_SHA256 Password-Based Key Derivation Function with 60,000 rounds is used, so rainbow tables (i.e. pre-computed password lists) will be inoperative, and each password trial will take much more time than with a regular Key Derivation Function.
The issue with strong passwords is that they are difficult to remember. If you do not use pure random passwords, but values which are easier to remember while keeping good entropy, you may try tools like https://xkpasswd.net/s which return values like $$19*wrong*DRIVE*read*61$$. But even then, you will be able to remember only a dozen such passwords. In a typical public key infrastructure, you may create hundreds of keys, so remembering all passwords is no option for an average human being like you and me.
In the end, you end up using a tool to store all your passwords (the latest trend is to use an online service with browser integration), or - admit it - storing them in an Excel document protected by a password. Most IT people - and even security specialists - end up using such a means of storage, just because they need it. The weaknesses of such solutions can be listed:
How could we trust closed source software and third-party online services?
The storage is as safe as the "master password" is safe;
If the "master password" is compromised, all your passwords are published;
You need to know the master password to add a new item to the store.
The ECC tool is able to work in "cheat mode", storing the generated password of every .private key file in an associated local .cheat file, encrypted using a cheat.public key. As a result:
Each key pair will have its own associated .cheat file, so you only unleash one key at a time;
The .cheat file content is meaningless without the cheat.private key and its master password, so you can manage and store them together with your .private files;
Only the cheat.public key is needed when creating a key pair, so you won't leak your master password, and even could generate keys in an automated way, on a distant server;
The cheat.private key will be safely stored in a separate place, only needed when you have to recover a password;
It uses strong File Encryption, with proven PBKDF, AFSplit, AES-PRNG, and ECDH/ECIES algorithms.
By default, no .cheat files are created. You need to explicitly initialize the "cheat mode", by creating master cheat.public and cheat.private key files:
>ecc cheatinit
Enter Issuer identifier text of the master cheat keys.
Will be truncated to 15-20 ascii-7 chars.
Issuer [arbou] :
Enter a private PassPhrase for the master cheat.private key (at least 8 chars).
Save this in a safe place: if you forget it, the key will be useless!
NewPass [uQHH*am39LLj] : verysafelongpassword
Enter iteration rounds for the master cheat.private key (at least 100000).
NewRounds [100000] :
cheat.public/.private file created.
As you can see, the default number of PBKDF rounds is high (100000), and local files have been created:
Imagine you forgot about the NewKeyP@ssw0rd value. You could use the following command to retrieve it:
>ecc cheat
Enter the first chars of the .private certificate file name.
Auth: D10
Will use: D1045FCBAA1382EE44ED2C212596E9E1.private
Enter the PassPhrase of the master cheat.private file.
AuthPass: verysafelongpassword
Enter the PassPhrase iteration rounds of the cheat.private file.
AuthRounds [100000] :
{
"pass": "NewKeyP@ssw0rd",
"rounds": 60000
}
Corresponding TSynPersistentWithPassword.ComputePassword:
encryption HeOyjDUAsOhvLZkMA0Y=
authMutual lO0mv+8VpoFrrFfbBFilNppn1WumaIL+AN3JXEUUpCY=
authServer lO0nv+8VpoFrrFfbBFilNppn1WumaIL+AN3JXEUUpCY=
authClient lO0kv+8VpoFrrFfbBFilNppn1WumaIL+AN3JXEUUpCY=
If your .private key does not have its associated .cheat file, you won't be able to recover your password:
>ecc cheat
Enter the first chars of the .private certificate file name.
Auth: 8BC9
Will use: 8BC90201EF55EE34F62DBA8FE8CF14DC.private
Enter the PassPhrase of the master cheat.private file.
AuthPass: verysafelongpassword
Enter the PassPhrase iteration rounds of the cheat.private file.
AuthRounds [100000] :
Fatal exception EECCException raised with message:
Unknown file 8BC90201EF55EE34F62DBA8FE8CF14DC.cheat
In practice, this "cheat mode" will help you implement a safe public key infrastructure of any size. It will be as secure as the main cheat.private key file and its associated password remain hidden and only wisely spread, of course. Don't forget to use the ecc rekey command on a regular basis, so that you change the master password of cheat.private. The main benefit of this implementation is that for all key generation process, only the cheat.public key file is needed.
You may note here the use of FillZero() in the finally block of the function, which is a common - and strongly encouraged - way of protecting your sensitive data from remaining in RAM after use. Both SynCrypto.pas and SynEcc.pas have been checked to follow similar safety patterns, and not to leave any sensitive information in the program stack or heap.
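As a minimal sketch of this pattern - assuming the RawByteString overload of FillZero() from SynCommons.pas, and with hypothetical helper functions used only for illustration - any local variable holding sensitive material should be wiped as soon as it is not needed any more:
procedure ProcessSecret;
var pass: RawByteString;
begin
  pass := RetrievePassword;   // hypothetical function returning a password
  try
    UsePassword(pass);        // hypothetical processing of the sensitive value
  finally
    FillZero(pass);           // overwrite the buffer, so no copy remains in RAM
  end;
end;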
23.3. Application Locking
A common feature request for professional software is to prevent abuse of published applications. For licensing or security reasons, you may be requested to "lock" the execution of programs, maybe tools or services.
mORMot can use Asymmetric Cryptography to ensure that only allowed users can run some executables, optionally with dedicated settings, on a given computer. The framework offers the first brick, on which you can build your dedicated system.
This function will use several asymmetric key sets:
A main key set, named e.g. applock.public and applock.private, shared for all users of the system;
Several user-specific key sets, named e.g. user@host.public and user@host.secret, one for each user and associated computer host name.
When the ECCAuthorize function is executed, it will search for a local user@host.unlock file, named after the current logged user and the computer host name. Of course, the first time the application is launched for this user, there will be no such file. It will create two local user@host.public and user@host.secret files and return eaMissingUnlockFile.
The main key set will be used to digitally sign the unlock file:
applock.public will be supplied as a plain base64-encoded aAppLockPublic64 text parameter in the executables - for safety, you should ensure its value is not replaced by a forged one by an attacker: the executable should be signed, or at least the constant value should be checked with a CRC during program execution;
On the contrary, applock.private will be kept secret - with its associated secret password.
User-specific key sets will be used to encrypt the unlock file:
The user@host.secret file contains in fact a genuine private key, encrypted using CryptDataForCurrentUser (i.e. DPAPI under Windows) for the specific computer and user: this will avoid user@host.unlock reuse on another computer, even if the user and host names are identical, and the user@host.secret file is copied. This file should remain local, and doesn't need to be transmitted.
The user@host.public file will be sent to the product support team, e.g. by email - but you may setup an automated server, if needed. The support team will create a user@host.unlock matching this user@host.public key, which will unlock the application for the given user.
On the support team side, a user@host.json file is created for the given user, and will contain the JSON serialization of the aContent: TObject parameter of the ECCAuthorize function. This object may contain any published properties, matching the security expectations for this user, e.g. the available features or resource access.
23.3.1. From the User perspective
The resulting process is therefore the following:
Application Unlocking via Asymmetric Cryptography
In short, every user/computer combination will have its own set of public/secret/unlock files.
In practice, applock.public could be hardcoded as plain base64-encoded aAppLockPublic64 constant string in the Application code - of course, the executable should be signed with a proper authority, to ensure this constant is not replaced by a fake value;
The location of those local user@host.* files is by default the executable folder, but may be specified via the aSearchFolder parameter - especially if this folder is read-only (e.g. due to Windows UAC), or if you use some custom GUI for the user interactivity;
The user@host.json will be signed using applock.private secret key, to testify that the resulting user@host.unlock file was indeed provided by the Support Team;
The user@host.json will be encrypted using the user@host.public key received by email, so will be specific to a single user/computer combination.
If two users share the application on the very same computer, another set of files will appear:
Application Unlocking on Two Computers
Several users on the same computer will be handled as such:
Application Unlocking for Two Users
From the User point of view, he/she will transmit his/her user@host.public file, then receive a corresponding user@host.unlock file, which will unlock the application. Pretty easy to understand - even if some complex asymmetric encryption is involved behind the scenes.
23.3.2. From the Support Team perspective
The Support Team will maintain a list of user@host.public and user@host.json files, one per user/computer. Both files have small JSON content, so they may be stored in a dedicated folder of the project source code repository - or in a dedicated repository. The use of a source code repository allows tracking user management information between several support people, including history and audit trail of this sensitive information. For safety, the applock.private file should better not be archived in the source code repository, but copied on purpose to each support person's (or developer's) computer. A separate, dedicated computer may be used, for additional safety.
In fact, even developers may define their own set of .unlock files. For local test builds, they may use their own applock.public and applock.private key pairs, distinct from the main ones.
The content of each user@host.json may easily be derived from a set of reference .json files, acting like templates for groups of users. Or an existing file may be used as the source for a new user. The ability to use JSON and a text editor, with customizable object and array fields, allows any needed kind of licensing or security scope, depending on the application. Since user@host.json is a serialized aContent: TObject, you can define enumerated properties, or even schema-less structures as TDocVariant - see TDocVariant custom variant type - to refine the authorization scope.
The user@host.json file is encrypted using the genuine user@host.public key, and its associated user@host.secret is strongly encrypted for the given PC and logged user: therefore, only the application is able to decipher the user@host.unlock content. You can let those files be transmitted via an unsafe means of transport, e.g. plain email, with no risk of compromise. Last but not least, passwords or IP addresses can be safely stored in its content, as part of the security policy of your project.
In practice, the team may use an unlock.bat file running the ECC tool over the secret applock.private keys, with the secret password included on the command line:
For safety, you may prefer not to include the -pass applockprivatepassword value in this unlock.bat file. Removing this -pass command-line switch will let the ecc tool prompt for the secret key password on the console:
Also note that you can use the ecc rekey command to customize the password of a given applock.private file: each support team member may have his/her custom password to run the sign-then-encrypt process.
Of course, if you need to create a lot of .unlock files, you may want to automate this process, e.g. in a server or a GUI tool, using SynEcc.pas classes.
23.3.3. Benefits of Asymmetric Encryption for License management
In most licensing systems, the weak point is the transmission of the licensing file. Thanks to Asymmetric Encryption, both user@host.public and user@host.unlock files can be transmitted as plain emails, without any possibility of compromise.
The applock.private secret key and its associated password are used to digitally sign (using ECDSA) the plain content of the user@host.unlock file. This sign-then-encrypt pattern will ensure that only your support team will be able to generate the proper .unlock files for a given application. The applock.private/public keys could have their own deprecation date.
As we have seen, the user@host.unlock file is encrypted, so you can use it to transmit sensitive information. Its associated user@host.secret key has been generated locally with an expiration date - see the aSecretDays parameter of the ECCAuthorize function. It will ensure that the registering process should be performed regularly, if the licensing or security policy expect it.
Of course, any such system is as weak as its weakest point. In particular, under Windows the executable should be digitally signed (as any professional software should be). You could also ensure that the aAppLockPublic64 public key has not been replaced by a fake value forged by an attacker - e.g. by checking its CRC in several places of your application:
if crc32($1239438,pointer(AppLock64),length(AppLock64))<>$ae293c10 then Close;
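The expected constant can be computed once, e.g. from a temporary debug build, then hardcoded - a minimal sketch, in which the $1239438 seed is arbitrary and IntToHex comes from SysUtils:
// run once at development time, then hardcode the displayed value
writeln('expected crc32 = $',
  IntToHex(crc32($1239438, pointer(AppLock64), length(AppLock64)), 8));
Using several different seeds, in several unrelated places of the application, makes it harder for an attacker to patch all the checks at once.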
The security of this system does not rely on code obfuscation, but on the proven safety of asymmetric encryption. Even if the executable is modified in-place to bypass the license check, the fact that the application expects some additional information to be provided within the user@host.unlock file will make it much more difficult to hack. As always with Open Source, any feedback is welcome, in order to enhance the safety of this system. The fact that the code is available - so that the algorithms can be reviewed - makes it safer than any proprietary solution developed in-house.
24. Domain-Driven-Design
Adopt a mORMot
We have now discovered how mORMot offers you some technical bricks to play with, but it is up to you to build the house (castle?), according to your customer needs.
This is where Domain-Driven Design - abbreviated DDD - patterns are worth looking at.
24.1. Domain
What do we call Domain here? The domain represents a sphere of knowledge, influence or activity.
As we already stated above, the domain has to be clearly identified, and your software is expected to solve a set of problems related to this domain.
DDD is some special case of Model-Driven Design. Its purpose is to create a model of a given domain. The code itself will express the model: as a consequence, any code refactoring means changing the model, and vice-versa.
24.2. Modeling
Even the brightest programmer will never be able to convert a real-life domain into its software code. What we can do is to create an abstraction system that describes selected aspects of a domain.
Modeling is about filtering the reality, for a given use context: "All models are wrong, some are useful" G. Box, statistician.
24.2.1. Several Models to rule them all
As a first consequence, several models may coexist for a given reality, depending on the knowledge level involved - what we call a Bounded Context. Don't be afraid if the same reality is defined several times in your domain code: you should use only one class in a given context, but you may have another class defined in another context, with different attributes or methods. Just open Google Maps for instance, and think how the same reality may be modeled depending on the zoom level, or your current view options. See also the M1, M2, M3 models as defined in the Meta-Object Facility. When you define several models, you just need to clearly state the current model you are using.
Even models could be abstracted. This is what DDD does: the code itself is some kind of meta-model, conforming a given conceptual model to the grammar of a given programming language.
24.2.2. The state of the model
Most models express the reality in two dimensions:
Static: to abstract a given state of the reality;
Dynamic: to abstract how reality evolves (i.e. its behavior).
In both dimensions, we can clearly understand the purpose of abstraction.
Since it is impossible to model all the details of reality (e.g. describe a physical reality down to atomic / sub-atomic level), the static modeling will forget the non significant details, and focus on the essentials, for a given knowledge level, which is specific to a given context.
Similarly, most changes are continuous in the world, but dynamic modeling will create static snapshots of the reality (called state transitions), to embrace the deterministic nature of computers.
State always brings complexity to the model. As a consequence, our code should be as stateless as possible. Therefore:
Try to always separate value and time in state;
Reduce statefulness to what is strictly necessary;
Implement your logic as state machines instead of blocking code or sessions;
Persistence should handle one-way transactions.
In DDD, Value Objects and Entity Objects are the means to express a given system state. Immutable Value Objects define a static value. An Entity refers to a given state of a given identity (or reality). For instance, the same identity (named "John Doe") may be, at a given state, single and minor, then, at another state, married and adult. The model will help express the given states, and the state transitions between them (e.g. John's marriage).
In DDD, the Factory / Repository / Unit Of Work patterns will introduce transactional support in a stateless approach.
And in situations where a reality does change its state very often, with complex impacts on other components, DDD will model these state changes as Events. It could lead to introducing some Event-Driven Design, or even Event Sourcing, within the global model.
24.2.3. Composition
In order to refine your model, you have two main tools at hand to express the model modularity:
Partitioning: the more your elements have a separated concern, the better;
Grouping: to express constraints, elements may be grouped - but usually, you should not put more than 6 or 8 elements in the same diagram, or your model may need to be refined.
In DDD, a lot of small objects have to be defined, in order to properly partition the logic. When we start with Object Oriented Programming, we are tempted to create huge classes with a lot of methods and parameters. This is a symptom of a weak model. We should always favor composition of small simple objects, just like the Unix tools philosophy or the Single Responsibility Principle - see SOLID design principles.
Some DDD experts also do not favor inheritance. In fact, inheritance may also be a symptom of some coupled context. Having two diverse realities share properties may be a bad design smell: if two or more classes inherit from one parent class, the state and behavior of the parent class may limit any future evolution of any of its children. In practice, trying to follow the Open/Closed principle - see Open/Closed Principle - at class level may induce unexpected complexity, therefore reducing code maintainability.
In DDD, the Aggregate Root is how you group your objects, in order to let constraints (e.g. business rules) to be modeled. Aggregates are the main entry point to the domain, since they should contain, by design, the whole execution context of a given process. Their extent may vary during development, e.g. when a business rule evolves - remember that the same reality can appear several times in the same domain, but once per Bounded Context. In other words, Aggregates could be seen as the smallest and biggest extent needed to express a given model context.
24.3. DDD model
It is now time to define which kind of Model-Driven Design is DDD:
Domain-Driven Design - Building Blocks
24.3.1. Ubiquitous Language
Ubiquitous Language is where DDD begins.
DDD expects the domain model to be expressed via a shared language, and used by all team members to connect their activities with the software. Those terms should be used in speech, writing, and any presentation or diagram.
In the real outside world - i.e. for the other of the "10 kinds of people", those who do not know about binary - domain experts use company- or industry-standard terminology.
As developers, we have to understand this vocabulary and not only use it when speaking with domain experts but also see the same terminology reflected in our code. If the terms "class code" or "rate sets" or "exposure" are frequently used in conversation, we shall find corresponding class names in the code. In DDD, it is critical that developers use the business language in code consciously and as a disciplined rule. As a consequence, browsing the code should lead into a clear comprehension of the business model.
Domain experts will be the gatekeepers of the consistency of this language, and of its proper definition. Even if the terms are expected to be consistent, they are not to be written in stone, especially during the initial phase of software development. As soon as one domain activity cannot be expressed using the existing set of concepts, the model needs to be extended. Removing ambiguities and inconsistencies is a necessity, and will, very often, resolve several not-yet-identified software issues.
24.3.2. Value Objects and Entities
For the definition of your objects or internal data structures (what good programmers care about), you are encouraged to distinguish between several kinds of objects. Following DDD, model-level representations are, generally speaking, rich in behavior, and therefore come in several families/species of objects.
Let us list the most high-level definitions of objects involved to define our DDD model:
Value Objects contain attributes (value, size) but no conceptual identity - e.g. money bills, or seats in a Rock concert, as they are interchangeable;
Entity objects are not defined by their attributes (values), but by their thread of continuity, signified by an identity - e.g. persons, or seats in most planes, as each one is unique and identified.
The main difference between Value Objects and Entities is that instances of the second type are tied to one reality, which evolves over time, therefore creating a thread of continuity.
Value objects are immutable by definition, so should be handled as read-only. In other words, they are incapable of change once they are created. Why is it important that they be immutable? With Value objects, you're seeking side-effect-free functions, yet another concept borrowed by DDD from functional languages (and not available in most OOP languages, until recent concurrent object definitions like in Rust, or the Immutable Collections introduced in C#/.NET 4.5). When you add $10 to $20, are you changing $20? No, you are creating a new money descriptor of $30. A similar behavior should be visible at code level.
Entities will very likely have an ID field, able to identify a given reality, and model the so-called thread of continuity of this identity. But this ID is an implementation detail, only used at the Persistence Layer level: at the Domain Layer level, you should not access Entities individually, but via a special Entity bound to a specific context, called an Aggregate Root (see next paragraph).
When we define some objects, we should focus on making the implicit become explicit. For instance, if we have to store a phone number, we won't use a plain string type for it, but we will create a dedicated Value object type, making explicit all the behavior of its associated reality. Then we will be free to combine all types into explicit grouped types, as needed.
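As a minimal sketch - with hypothetical types which are not part of the framework, and assuming SynCommons' TSynPersistent and RawUTF8 - a phone number Value Object and a person Entity could be declared as such:
type
  // Value Object: interchangeable, defined only by its attributes, handled as read-only
  TPhoneNumber = record
    CountryCode: RawUTF8;
    Number: RawUTF8;
  end;

  // Entity: defined by its thread of continuity, i.e. its identity
  TPerson = class(TSynPersistent)
  protected
    fID: Int64;              // the identity - a Persistence Layer implementation detail
    fName: RawUTF8;          // attributes may change over time...
    fPhone: TPhoneNumber;    // ...while the identity remains the same
  public
    property ID: Int64 read fID;
    property Name: RawUTF8 read fName write fName;
    property Phone: TPhoneNumber read fPhone write fPhone;
  end;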
24.3.3. Aggregates
Aggregates are a particular case of Entities, defined as a collection of objects (nested Values and/or Entities) that are grouped together by a root Entity, otherwise known as an Aggregate Root, whose scope has been defined by a given execution context - see "Composition" above.
In practice, Aggregates may be the only kind of objects which will be persisted at the Application layer, before calling the domain methods: even if each nested Entity may have its own persistence method (e.g. one RDBMS table per Entity), Aggregates may be the unique access point to retrieve or update a given state. This will ensure so-called Persistence Ignorance, meaning that the domain should remain uncoupled from any low-level storage implementation detail.
DDD services may just permit remote access to Aggregates methods, where the domain logic will be defined and isolated.
24.3.4. Factory and Repository patterns
DDD then favors some patterns to use those objects efficiently.
The Factory pattern is used to create object instances. In strongly-typed OOP (like in Delphi, Java or C#), this pattern is in fact its constructor method and associated class type definition, which will define a fixed set of properties and methods at compilation time (this is not the case e.g. in JavaScript or weak-typed script languages, in which you can add methods and properties at runtime). In fact, Delphi is ahead of Java or C#, since it allows virtual constructors to be defined. Those virtual constructors are in fact a clean and efficient way of implementing a Factory, and also fulfill SOLID principles, especially the Liskov Substitution Principle: the parent class define an abstract constructor on which you rely, but the implementation will take place in the overridden constructor.
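Here is a minimal sketch of such a virtual constructor used as a Factory - the notifier classes are hypothetical, for illustration only:
type
  TNotifier = class
  public
    constructor Create; virtual;                  // the Factory contract
    procedure Notify(const aMsg: RawUTF8); virtual; abstract;
  end;
  TNotifierClass = class of TNotifier;            // meta-class consumed by the Factory

  TConsoleNotifier = class(TNotifier)
  public
    procedure Notify(const aMsg: RawUTF8); override;
  end;

constructor TNotifier.Create;
begin
  inherited Create; // common initialization could take place here
end;

procedure TConsoleNotifier.Notify(const aMsg: RawUTF8);
begin
  writeln(aMsg);
end;

// the Factory itself: callers rely on the abstract contract only
function NewNotifier(aClass: TNotifierClass): TNotifier;
begin
  result := aClass.Create; // virtual constructor -> the expected class is instantiated
end;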
The Repository pattern is used to save and dispense each Aggregate Root. It matches the "Layer Supertype" pattern (see above), e.g. via our mORMot TSQLRecord and TSQLRest classes and their Client-Server ORM features, or via dedicated repository classes - saving data is indeed a concern orthogonal to the model itself. DDD architects claim that persistence is infrastructure, not domain. You may benefit from defining your own repository interface, if the standard ORM / CRUD operations are not enough.
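As a hypothetical illustration - reusing the TPerson sketch above, with a name and GUID that are not part of the framework - a dedicated repository contract could be expressed as a plain interface, to be implemented by an ORM-based class in the Infrastructure layer:
type
  // the Domain depends on this abstraction only, never on the actual storage
  IPersonRepository = interface
    ['{7B0A2C3D-5E14-4A8F-9D26-1C3B5E7F9A01}']
    function Retrieve(aID: Int64; aPerson: TPerson): boolean; // fill an existing instance
    function Save(aPerson: TPerson): boolean;
  end;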
24.3.5. DTO and Events to avoid domain leaking
The main DDD architecture principle - and benefit - is to isolate the domain code. As will be defined by the Hexagonal architecture - see below, everything is made to ensure that the domain won't "leak" outside its core. The domain objects and services are the most precious part of any DDD project, especially in the long term, so proper isolation and uncoupling sound mandatory.
The Aggregates should always be isolated and stay at the Application layer, giving access to their methods and nested objects via proper high-level remote Services - see below - which should not be published directly to the outer world either.
In practice, if your domain is properly defined, most of your Value Objects may be sent to the outer world without explicit translation. Even Entities may be transmitted directly, since their methods should refer to nothing but their internal properties, so they may be of some use outside the domain itself.
But the real world may be rough and cruel, and optimism had better be replaced by some kind of pragmatism, and a pinch of cynicism. DDD experience taught its pioneers (sometimes in a painful manner) that Adapter types should better be defined, especially at the Application layer and Presentation layer levels.
As a result, a new family of objects will secure any DDD implementation:
Data Transfer Objects (DTO) are transmission objects, whose purpose is to avoid sending your domain across the wire (i.e. to separate your layers, following the Anti-Corruption Layer pattern). This encourages you to create gatekeepers (e.g. in the Application layer) that work to prevent non-domain concepts from leaking into your model.
Commands and Events are some kind of DTO, since they communicate data about an event and they themselves encapsulate no behavior.
Using such dedicated types will eventually help uncoupling the domain, for several reasons:
You can refactor your domain, without the need to modify the published interfaces, but just the tiny Anti-Corruption layer: no need for your customers to spend money upgrading their client applications, just because your domain changed; no fear to refine your precious domain code, in which you put all your money and expectations, just because it may be unpleasant to your customers.
End-user application expectations won't pollute your domain. For instance, you will better define a per-customer set of public APIs, rather than exposing your domain services. In practice, a "one to rule them all" public API may sound like a good idea at first, but it will eventually end up as a monstrous, flat, unreadable and anemic interface, far away from SOLID design principles.
Since the domain tends to be as generic as possible, its objects may sometimes be overkill for the end user applications: if some properties will never be used, or will always be void, why would you pollute your end user code, and waste bandwidth or resources? Just stick to what is needed.
Dedicated types will help focusing on the needed use cases, so will ease documentation, maintainability, testing and integration with client applications: even translating your Ubiquitous language objects into more common or expected terms in the presentation layer will be beneficial.
Consider that in your company, the Domain and Infrastructure layers may be maintained by your most valuable teams, whereas some less skilled developers (or even offshore teams) may be involved in the Application and Presentation layers. Writing adapter/translator classes is not difficult, and will help your company focus and invest where long term ROI is more likely to appear. Some access restrictions may therefore appear at source code level: it may be wise to allow only the most experienced programmers to modify the domain code, and even to hide the domain implementation by publishing only its interfaces, protecting your most valuable intellectual property from being copied or stolen.
In mORMot, we try to let the framework do all the plumbing, letting those types be implemented via interfaces over simple dedicated types like records or dynamic arrays - see Service Methods Parameters and Asynchronous callbacks. So defining DTOs, Commands and Events in dedicated Anti-Corruption layers will be quick, easy and safe.
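For instance - as a hypothetical sketch, reusing the TPerson entity above - a DTO published to a given client application may flatten only the fields this client actually needs:
type
  // DTO: a plain record with no behavior, serialized as JSON by the framework
  TPersonDTO = packed record
    FullName: RawUTF8;
    Phone: RawUTF8;
  end;

// Anti-Corruption translation: the domain Entity itself never crosses the wire
function PersonToDTO(aPerson: TPerson): TPersonDTO;
begin
  result.FullName := aPerson.Name;
  result.Phone := aPerson.Phone.CountryCode + ' ' + aPerson.Phone.Number;
end;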
24.3.6. Services
Aggregate roots (and sometimes Entities), with all their methods, often end up as state machines, with their behavior implemented accordingly. In the domain, since Aggregate roots are the only kind of entities to which your software may hold a reference, they tend to be the main access point of any process. It could be handy to publish their methods as stateless Services, isolated at the Application layer level.
The Domain Services pattern is used to model primary operations. Domain Services give you a tool for modeling processes that do not have an identity or life-cycle in your domain, that is, that are not linked to one aggregate root - perhaps to none, or to several. In this terminology, services are not tied to a particular person, place, or thing in your application, but tend to embody processes. They tend to be named after verbs or business activities that domain experts introduce into the so-called Ubiquitous Language. If you follow the interface segregation principle - see Interface Segregation Principle - your domain services should be exposed as dedicated client-oriented methods. Do not leak your domain! In DDD, you develop your Application layer services directly from the needs of your client applications, letting the Domain layer focus on the business logic.
Unit Of Work can be used to maintain a list of objects affected by a business transaction, and to coordinate the writing out of changes and the resolution of concurrency problems. In short, it implements a transactional process at Domain level, and may be implemented either at service or ORM level. It features so-called Persistence Ignorance, meaning that your domain code should not be tied to a particular persistence implementation, but "hydrate" Aggregate root class instances as abstractly as possible. A dual-phase commit approach - with some methods preparing and validating the data, then applying it via a dedicated Commit command in a second step - may be defined. In this pattern, the repository is just some simple storage, and data consistency will take place at domain level: for instance, you will not define any SQL constraints, but validate your data before storing the information. Your business rules should be written in high level domain code, and you may forget about the FOREIGN KEY, or CHECK SQL syntax flavors. As a result, you may safely change from a SQL database to a NoSQL engine, or even a TObjectList. You will be able to define and maintain any complex business rules, using the Ubiquitous Language of your domain. And a change of business logic will not impact the database metadata, which may be painful to modify.
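Such a dual-phase commit service could be sketched as a plain interface - again with hypothetical names and GUID, just to illustrate the pattern:
type
  TOrderDTO = packed record
    CustomerID: Int64;
    Amount: Currency;
  end;
  TOrderWriteResult = (owSuccess, owInvalidData, owConflict);

  // application-level contract: validate first, then commit (or rollback) as a whole
  IOrderWriter = interface
    ['{2D6A9C41-0B7E-4F83-A5D2-8E91C4B6F7A3}']
    function Prepare(const aOrder: TOrderDTO): TOrderWriteResult; // validation only
    function Commit: TOrderWriteResult;   // apply all prepared changes at once
    procedure Rollback;                   // discard any prepared changes
  end;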
The DDD Services may therefore be stateless most of the time, while still allowing some flavor of transactional process, when needed. The uppermost/peripheral architecture layers - i.e. Application or Presentation Layers - will ensure that those services are properly orchestrated. The application workflows will not be defined in the domain core itself, but in those outer layers, resulting in a cleaner, uncoupled architecture.
24.3.7. Clean Uncoupled Architecture
If you follow the DDD patterns properly, your classic Multi-tier architecture will evolve into a so-called Clean Architecture or Hexagonal architecture.
Even if physically, this kind of architecture may still look like a classic layered design (with presentation on the top, business logic in the middle and a database at the bottom - and in this case we speak of N-Layered Domain-Oriented Architecture), DDD tries to isolate the Domain Model from any dependency, including technical details.
As a consequence, the logical architecture of any DDD solution should appear as such:
Clean Uncoupled Domain-Oriented Architecture
That kind of architecture is not designed in layers any more, but more like an Onion.
At the core of the bulb - sorry, of the system, you have the Domain Model. It implements all Value Objects and Entity Objects, including their state and behavior, and associated unit tests.
Around this core, you find Domain Services which add some more behavior to the inner model. Typically, you will find here abstract interfaces that provide persistence (Aggregates saving and retrieving via the Repository pattern), let Domain objects be instantiated (via the Factory pattern), or give access to third-party services (for service composition in a SOA world, or e.g. to send a notification email).
Then Application Services will define the workflows of all end-user applications. Even if the core Domain is to be as stable as possible, this outer layer is what will change most often, depending on the applications consuming the Domain Services. Typically, workflows will consist in re-hydrating some Aggregates via the Repository interface, then calling the Domain logic (via its objects' methods, or, for primary operations, via wider Domain services), calling any external service, and validating ("committing", following Unit-Of-Work or transactional terms) objects modifications. Some non data-centric processes will also benefit from a dual-phase commit pattern, to allow safe orchestration of uncoupled domain and third party services.
Out on the edges you see User Interface, Infrastructure (including e.g. database persistence), and Tests. This outer layer is separated from the other three internal layers, which are sometimes called Application Core. This is where all technical particularities will be concentrated, e.g. where RDBMS / SQL / ORM mapping will be defined, or platform-specific code will reside. This is the right level to test your end-user workflows, e.g. using Behavior-Driven Development (abbreviated BDD), with the help of your Domain experts.
The premise of this Architecture is that it controls coupling. The main rule is that all coupling is toward the center: all code can depend on layers more central, but code cannot depend on layers further out from the core. This is clearly stated in the Clean Uncoupled Domain-Oriented Architecture diagram: just follow the arrows, and you will find out the coupling order. This architecture is unashamedly biased toward object-oriented programming, and it puts objects before all others.
This Clean Architecture relies heavily on the Dependency Inversion principle - see SOLID design principles. It emphasizes the use of interfaces for behavior contracts, and it forces the externalization of infrastructure to dedicated implementation classes. The Application Core needs implementation of core interfaces, and if those implementing classes reside at the edges of the application, we need some mechanism for injecting that code at runtime so the application can do something useful. mORMot's Client-Server services via interfaces provide all needed process to access, even remotely, e.g. to persistence or any third party services, in an abstract way.
With Clean Architecture, the database is not the center of your logic, nor the bottom of your physical design - it is external. Externalizing the database can be quite a challenge for some people used to thinking about applications as "database applications", especially for Delphi programmers with a RAD / TDataSet background. With Clean Architecture, there are no database applications. There are applications that might use a database as a storage service, but only through some external infrastructure code that implements an interface which makes sense to the application core. The domain could even be decoupled from any ORM pattern, if needed. Decoupling the application from the database, file system, third party services and all technical details lowers the cost of maintenance for the life of the application, and allows proper testing of the code, since all Domain interface types could be mocked on purpose - see Stubs and mocks.
24.4. mORMot's DDD
24.4.1. Designer's commitments
Before going a bit deeper into the low-level stuff, here are some key sentences we should better often refer to:
I shall collaborate with domain experts;
I shall focus on the ubiquitous language;
I shall not care about technical stuff or framework, but about modeling the Domain;
I shall make the implicit explicit;
I shall use end-user scenarios to get real and concrete;
I shall not be afraid of defining one model per context;
I shall focus on my Core Domain;
I shall let my Domain code uncoupled to any external influence;
I shall separate values and time in state;
I shall reduce statefulness to the only necessary;
I shall always adapt my model as soon as possible, once it appears inadequate.
As a consequence, you will find in mORMot no magic powder to build your DDD, but all the tools you need to focus on your business, without losing time in re-inventing the wheel, or fixing technical details.
24.4.2. Defining objects in Delphi
How to implement all those DDD concepts in an object-oriented language like Delphi? Let's go back to the basics. Objects are defined by a state, a behavior and an identity. A factory helps create objects with the same state structure and behavior.
In Delphi and most Object-Oriented (OOP) languages - including C# or Java, each class instance has the following behavior:
State is defined by all its property / member values;
Behavior are defined by all its methods;
Identity is defined by reference, i.e. a=b is true only if a and b refer to the same object;
Factory is in fact the class type definition itself, which will force each instance to have the same members and methods.
In Delphi, the record type (and deprecated object type for older versions of the compiler) has an alternative behavior:
State is also defined by all its property / member values;
Behavior are also defined by all its methods;
But identity is defined by content, i.e. RecordEquals(a,b) is true only if a and b have the same exact property values;
Factory is in fact the record / object type definition itself, which will force each instance to have the same members and methods.
In practice, you may use either one of the two kinds of object types (i.e. either class or record), depending on the behavior expected by DDD patterns:
But other kinds of DDD objects, i.e. Value Objects, Entity Objects and Aggregates, should better be defined as dedicated class types, since a class type definition offers more possibilities than a plain record structure. The framework defines some parent classes (e.g. TSynPersistent and TSynAutoCreateFields) which make working with class instances almost as easy as with stack-allocated record values.
24.4.3. Defining DDD objects in mORMot
When defining domain objects, we should always make the implicit explicit, i.e. writing one class type per reality in the model, in every bounded context. Thanks to Delphi's strong typing, you will ensure that the Domain Ubiquitous language will appear in the code, and that your model will be expressed in a clean, uncoupled way.
If those class types are defined as plain PODO, even your domain experts - who may not know anything about writing code - may be part of the class definition: we usually write the domain objects and services with the domain experts, writing the code in real time during a meeting. The domain is therefore expressed as plain code, and experts are able to validate the workflows and properties of the model as soon as possible. Such coding sessions truly benefit from being cooperative team work, not a coders-only exercise.
Once the domain model is stabilized, we may start implementing the interfaces using this common work as contract. In this implementation process, the mORMot framework offers a lot of tools to make it happen in a quick and efficient manner.
There are in fact two ways of implementing DDD objects as class types, in mORMot:
Directly using the framework types, e.g. TSQLRecord specialized class for Entities or Aggregates;
Or relying on no framework structure, but clean PODOs (Plain Old Delphi Objects - see the so-called POJO or POCO for Java or C#) class types, then use the mORMotDDD.pas unit for automatic marshalling.
Of course, the second option may be preferred, since it sounds like a better implementation path, uncoupled from the framework itself. Remember that DDD is mainly about uncoupling the Domain code from any external dependency, even from mORMot itself. You should better not be forced to use the framework ORM, if you have some existing legacy SQL statements, for instance.
24.4.3.1. Use framework types for DDD objects
If you want to directly use framework structures, DDD's Value Objects are probably meant to be defined as record, with methods (i.e. in this case as object for older versions of Delphi). You may also use TComponent or TSQLRecord classes, ensuring the published properties do not have setters but just read F... definitions, to make them read-only and, at the same time, directly serializable. If you use record / object types, you may need to customize the JSON serialization - see Record serialization - when targeting AJAX clients, especially for any version prior to Delphi 2010 (by default, records are serialized as binary + Base64 encoding due to the lack of enhanced RTTI, but you can easily define the record serialization e.g. from text). Note that since record / object defines by-value types in Delphi (whereas class defines by-reference types - see previous paragraph), they are probably the cleanest way of defining Value Objects.
In this context, DDD's Entity objects could inherit from TSQLRecord. It will give access to a whole set of methods supplied by mORMot, implementing some kind of "Layer Supertype", as explained by Martin Fowler.
For most simple cases, this solution may be just good enough. But it may have the drawback of coupling your Domain logic with mORMot internals. Your Domain will eventually be polluted by the framework implementation details, which should better be avoided.
24.4.3.2. Define uncoupled DDD objects
In order to uncouple our Domain code from its persistence layer, mORMot offers some dedicated types and units to use PODO class definitions within your DDD core.
You may use regular TPersistent as parent class, but you may consider using TSynPersistent and TSynAutoCreateFields instead - we will soon see their benefits.
Let's start from existing code, available in the SQLite3\DDD\dom sub-folder of the framework source code repository, in the dddDomUserTypes.pas unit. This unit defines some reusable class types, able to store user information, in a clean DDD way.
24.4.3.3. Specialize your simple types
Each reality in this unit will have its own type definition, using the extended pascal syntax, even for simple types like string or integer:
type
  TSpecifiedType = type TParentType;
You may not be familiar with this syntax. But it is a pretty powerful means of defining your DDD model with plain pascal syntax. Here TSpecifiedType is defined as a specific type, which will behave like TParentType, but strong typing will apply in your code, so that the compiler will complain if you pass e.g. a TParentType instead of a TSpecifiedType as a var parameter. It will help resolve some ambiguities when transmitting information.
Thanks to those type definitions, you will be able to make a difference between a last name, a first name and a middle name. We used RawUTF8 as parent type, but we may have used string. Since we wanted our code to work seamlessly with all versions of Delphi and FPC, we rather rely on RawUTF8 - see Unicode and UTF-8.
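As a minimal sketch (assuming RawUTF8 as the parent type, as stated above - the actual dddDomUserTypes.pas unit defines more of them, and may name them slightly differently), such definitions may look like:
type
  TLastName = type RawUTF8;
  TFirstName = type RawUTF8;
  TMiddleName = type RawUTF8;
  TPetName = type RawUTF8;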
Once compiled, there won't be any difference between the three types, which will behave like a RawUTF8. But at compile time, and in your Domain source code, you will be able to know exactly which reality is stored in a given variable.
So instead of this method definition:
function UserExists(const aUserName: RawUTF8): boolean;
You will rather write:
function UserExists(const aUserName: TLastName): boolean;
With such a method signature, we will ensure that we won't supply a TFirstName or a TPetName by mistake.
It may sound like a small enhancement, but be sure that it will increase your code safety and expressiveness. One of the biggest failures in NASA history was Mars Climate Orbiter. A variable type error burned up a $327.6 million project in minutes, when one engineering group working on the thrusters measured in English units of pounds-force seconds, whereas the others used metric Newton-seconds. The result of that inattention is now lost in space, possibly in pieces.
Remember how our physics teachers leaped all over answers that consisted of a bare number. If the answer was 2.5, they would take their red pens and write "2.5 what? Weeks? Puppies? Demerits?" and proceed to mark the answer wrong. In our DDD code, we should rather follow this rule, and try to make the implicit explicit.
24.4.3.4. Define your PODO classes
The main point is first to define your DDD Objects as plain Delphi class types - the famous PODOs, following the Ubiquitous Language. We will in fact define Value Object class types, which may be grouped and nested to become Entity Objects or Aggregates.
To define a TPerson object, able to model a person's identity, we may write the following classes:
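Here is a minimal sketch of what such classes may look like, reconstructed from the properties used in the following paragraphs - the reference definitions, with more fields and methods, are in dddDomUserTypes.pas:
type
  TPersonFullName = class(TSynPersistent)
  protected
    fFirst: TFirstName;
    fMiddle: TMiddleName;
    fLast: TLastName;
  public
    /// compute the full name at runtime, depending on per-country culture
    function FullName: RawUTF8; virtual;
  published
    property First: TFirstName read fFirst write fFirst;
    property Middle: TMiddleName read fMiddle write fMiddle;
    property Last: TLastName read fLast write fLast;
  end;

  TPersonBirthDate = class(TSynPersistent)
  protected
    fDate: TDateTime;
  public
    /// convenient overloaded methods to compute the age
    function Age: integer; overload;
    function Age(FromDate: TDateTime): integer; overload;
  published
    property Date: TDateTime read fDate write fDate;
  end;

  TPerson = class(TSynAutoCreateFields)
  protected
    fName: TPersonFullName;
    fBirth: TPersonBirthDate;
  public
    /// compare the objects per value, not per reference
    function Equals(another: TPerson): boolean; reintroduce;
  published
    property Name: TPersonFullName read fName;
    property Birth: TPersonBirthDate read fBirth;
  end;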
TSynAutoCreateFields inherits from TSynPersistent, and its overridden Create will allocate all published class properties auto-magically - whereas its overridden Destroy will release those instances for you. As such, inheriting from TSynAutoCreateFields makes it a perfect fit for a Value Object, nesting sub objects as properties;
Both have the RTTI enabled, so all published properties will be easily serialized as JSON (when used as DTO), or persisted later on on a database, when joined as Aggregate Roots.
In the above code, we defined TPerson.Name as a TPersonFullName class. So that we may use aPerson.Name.First or aPerson.Name.Last or even the runtime-computed aPerson.Name.FullName method which is able to display the full name, depending on per-country culture. We also reintroduced the Equals() method, which will allow to compare the objects per value, and not per reference.
Even if the birth date is just a date, we introduced a dedicated TPersonBirthDate class. The benefit is to have the overloaded Age() methods, which are pretty convenient in practice.
Once serialized as JSON, a TPerson content may be:
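For instance, following the sketch above (the field names and values are purely illustrative), it may look like:
{"Name":{"First":"John","Middle":"","Last":"Smith"},"Birth":{"Date":"1972-06-24"}}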
During the modelization phase, you will just define such class types, trying to reflect DDD's Ubiquitous Language into regular Delphi classes.
Take a look at the dddDomUserTypes.pas unit, to identify such patterns, and how we may be able to define an application user, gathering our TPerson class with a TAddress, in which a TCountry class will be used to store the corresponding country:
/// a Person object, with some contact information
// - an User is a person, in the context of an application
TPersonContactable = class(TPerson)
protected
  fAddress: TAddress;
  fPhone1: TPhoneNumber;
  fPhone2: TPhoneNumber;
  fEmail: TEmailAddress;
public
  function Equals(another: TPersonContactable): boolean; reintroduce;
published
  property Address: TAddress read fAddress;
  property Phone1: TPhoneNumber read fPhone1 write fPhone1;
  property Phone2: TPhoneNumber read fPhone2 write fPhone2;
  property Email: TEmailAddress read fEmail write fEmail;
end;
You can see that we did not pollute the class definition with any detail about persistence. What we did by now was to define a plain Value Object. We did not even specify that this class may be an Entity, nor introduce a primary key to identify it from a single access point. We found this way much cleaner than the approach of most other Java or C# DDD frameworks, which usually require inheriting from a parent Entity class, or using attributes to define the persistence expectations (like the primary key). We think that the domain types should not be polluted with those implementation details, but focus on expressing the model.
We will finally define a TUser Entity (or Aggregate Root), inheriting from TPersonContactable, i.e. modeling any application user account with all its personal information, with a flag to testify that its email was validated:
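A hedged sketch of such a type, reconstructed from the fields used later in this document - the exact property types, the dynamic array declaration, and the enumeration values other than evValidated are assumptions:
type
  TDomUserEmailValidation = (evUnrequested, evRequested, evValidated);

  TUser = class(TPersonContactable)
  protected
    fLogonName: RawUTF8; // may be a dedicated TLogonName type in the real unit
    fEmailValidated: TDomUserEmailValidation;
  published
    property LogonName: RawUTF8 read fLogonName write fLogonName;
    property EmailValidated: TDomUserEmailValidation
      read fEmailValidated write fEmailValidated;
  end;

  /// dynamic array of TUser instances, as used by the GetAll query below
  TUserObjArray = array of TUser;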
Such a TPersistent-inheriting class could be used as a Value Object (or even a DTO), but become an Entity or Aggregate in the bounded context of the user account personal information. In order to store this data, we will now define an interface, implementing a Persistence Service.
24.4.3.5. Store your Entities in CQRS Repositories
When persisting our precious DDD Objects, the framework tries to follow some DDD patterns:
Define Aggregate Root (or Entities) from Value Objects, as practical data context for storing the information;
Use a Repository service to store those Aggregates instances;
Follow CQRS (Command Query Responsibility Segregation) via a dedicated dual interface, splitting reads (Queries) and writes (Commands) in the Repository contract;
Use Factory to instantiate CQRS Repository contracts on need.
In practice, we will use a Factory to create Repository class instances implementing the CQRS service methods, defined as a hierarchy of interface types, for a given Aggregate Root. Let's start from an example, i.e. implement CQRS Repository services for our TUser class.
24.4.3.5.1. CQRS Interfaces
The mORMotDDD.pas unit defines the following interface, which will benefit of being the root interface of all Repository services:
type
  ICQRSService = interface(IInvokable)
    ['{923614C8-A639-45AD-A3A3-4548337923C9}']
    function GetLastError: TCQRSResult;
    function GetLastErrorInfo: variant;
  end;
This interface does nothing but allow generic access to the last error which occurred. This will be used instead of Exception, via the TCQRSResult enumeration, as a safe way of handling errors in a remote Service.
Exceptions are very convenient when running code in a process, but are difficult to handle over a remote connection, since the execution context is spread on both client and server sides. It is very difficult to propagate an exception raised on the server side to the client side, without leaking the server implementation. For instance, the SOAP standard provides a way of transmitting execution errors as dedicated XML messages - but it turns out to be a very verbose and complex path.
In mORMot, we defined a generic way of sending errors to the client side, for CQRS Services. By convention, any method will be defined as a function, returning its execution state as a TCQRSResult enumeration. If cqrsSuccess is returned, no error happened on the server side, and execution may continue on the client side. Otherwise, an error "kind" is specified in the transmitted TCQRSResult value, and additional information is available as string or as a TDocVariant custom variant type via the ICQRSService.GetLastErrorInfo method. This allows safe handling of any kind of execution error on the client side, without the need to define dedicated exceptions. As we already stated about Error handling, exceptions should be exceptional - please refer to this paragraph for more details, including the fact that any stubbed or mocked interface will return cqrsSuccess (i.e. 0) by default, letting the tests pass.
For our TUser CQRS Repository service, we will therefore define two interface types, one inheriting from ICQRSService for the Queries methods, and another one inheriting from this latter interface to define the Commands methods:
CQRS Repository Service Interface for TUser
In dddDomUserCQRS.pas, we therefore defined two interface types, one IDomUserQuery for the read operations (i.e. Queries) of TUser aggregates, and an inherited IDomUserCommand for the write operations (i.e. Commands) of TUser aggregates.
We may argue that IDomUserCommand inheriting from IDomUserQuery is actually a violation of the Command Query Responsibility Segregation principle. Here, Commands are tied to Queries. Of course, we may have defined two diverse interfaces, both inheriting from ICQRSService as parent:
CQRS Dogmatic Repository Service Interface for TUser
Nothing prevents you from doing this. But in our case, especially with the mORMot underlying ORM, or a RDBMS database, the benefit is not obvious - it sounds more like a dogmatic approach. To update a resource, you would need two interfaces: one IDomUserQuery instance to retrieve the existing value object, then one IDomUserCommand to modify it. From our pragmatic point of view, it is not mandatory. Also note that interface inheritance may differ from actual implementation class inheritance. IDomUserCommand may inherit from IDomUserQuery, but, e.g. if performance matters, you may still be able to implement a plain IDomUserQuery service with a dedicated class, on a separated database. In our case, interface inheritance is a common way of increasing code reuse. So if you want to be dogmatic about CQRS, you could - but only if it is worth the effort.
24.4.3.5.2. Queries Interface
Since we will separate queries and commands, we will first define the interface for actually reading TUser information:
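A hedged sketch of this Queries contract, reconstructed from the methods implemented later in this chapter - the reference definition is in dddDomUserCQRS.pas, and the GUID below is purely hypothetical:
type
  IDomUserQuery = interface(ICQRSService)
    ['{5A6CF54C-69E3-4735-A4A5-D76C21D6A6E7}'] // hypothetical GUID
    function SelectByLogonName(const aLogonName: RawUTF8): TCQRSResult;
    function SelectByEmailValidation(aValidationState: TDomUserEmailValidation): TCQRSResult;
    function SelectByLastName(const aName: TLastName; aStartWith: boolean): TCQRSResult;
    function Get(out aAggregate: TUser): TCQRSResult;
    function GetAll(out aAggregates: TUserObjArray): TCQRSResult;
    function GetNext(out aAggregate: TUser): TCQRSResult;
    /// a stateless method, returning a plain integer
    function HowManyValidatedEmail: integer;
  end;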
As we stated previously, all those methods do return a TCQRSResult enumeration, which will be used on the service consumer side to notify on any execution error.
Instances of those interfaces will in fact have a limited life-time. To access the TUser persistence layer, a CQRS interface will be injected - via Dependency Injection and Interface Resolution - then allow handling of one or several TUser instances.
For queries, you could use the IDomUserQuery.SelectByLogonName, IDomUserQuery.SelectByLastName or IDomUserQuery.SelectByEmailValidation methods to initialize a request. As you can see, there is no mention of primary key or ID in this interface definition. Even if, under the hood, the implementation may use our ORM, and a TSQLRecord with its TSQLRecord.ID: TID property, the CQRS interfaces themselves do not expose those implementation details - unless it becomes necessary. In our use case of an application targeting a single user, it is enough to be able to retrieve a user by its logon name, or by its last name.
If the Select* method executed without error (i.e. returned cqrsSuccess), we can later on retrieve the content by calling:
IDomUserQuery.Get for filling the properties of a single already existing TUser object;
IDomUserQuery.GetAll for retrieving all matching aggregates as a TUserObjArray, or IDomUserQuery.GetNext for iterating over them one by one.
Since the IDomUserQuery interface has a lifetime, you could call IDomUserQuery.Get or IDomUserQuery.GetAll several times after a single Select*. Note that in the common ORM-based implementation we will define below, the TUser information is actually retrieved and stored in memory by the Select* method.
Note that in the IDomUserQuery contract, the IDomUserQuery.HowManyValidatedEmail method, on the other hand, is stateless, and could be used without any prior Select*. Such methods may appear, depending on the Domain expectations.
The main point here is that, when defining your CQRS interfaces, you should focus on which data you need to access, in the most convenient way for you, and forget about the real persistence implementation - i.e. how data is stored. This is called, in DDD terms, Persistence Ignorance, and is a very convenient way of uncoupling your business logic from actual technical details. If you were never asked by your sales people to support a new database engine, or even to switch from a SQL to a NoSQL storage, or to an existing legacy proprietary obscure database used by a given customer... you are a lucky programmer, but - you know - it happens in real life!
Another advantage of starting from what you need in your domain, by using interface types as contracts, is that you will probably focus on the domain, and may avoid the risk of an anemic domain model symptom, which appears when your persistence service is just a CRUD operation in disguise. If we need only CRUD operations, an ORM, or even plain SQL is enough. But if we want to have our domain code follow the ubiquitous language, and stick to the use cases of our business model, we should better design the persistence this way.
Last but not least, you will be able to mock or stub the persistence service - see Stubs and mocks - easing unit testing of your Domain code, without any dependency on an actual database layer. Following Test Driven Design, you will even be able to write the Domain core tests first, validate all your interfaces, even write the Application layer and test it with the current mock-up of the end-user application, and eventually finalize and tune the SQL or NoSQL storage in a final step, when the whole workflow is stabilized. It will help testing sooner, therefore fixing sooner, and... hopefully releasing sooner.
24.4.3.5.3. Commands Interface
Following the CQRS (Command Query Responsibility Segregation) principle, we defined the write operations (i.e. Commands) in a separate interface. This type will inherit from IDomUserQuery, since it may be convenient to first read the TUser, for instance before applying a modification to the stored information, like updating existing data, or adding a missing entry.
The main method of this Command interface is Commit. Following the dual-phase commit pattern, nothing will be written to the actual persistence storage unless this IDomUserCommand.Commit method is actually called.
In short, you query then update your data using the other Add/Update/Delete/... methods, then you run Commit.
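A hedged sketch of this Commands contract, again reconstructed from the methods used below - the reference definition is in dddDomUserCQRS.pas, and the GUID and exact Delete/DeleteAll signatures are assumptions:
type
  IDomUserCommand = interface(IDomUserQuery)
    ['{D1B4C2A7-9E3F-4E6B-8C52-3F1A7B0E94D8}'] // hypothetical GUID
    function Add(const aAggregate: TUser): TCQRSResult;
    function Update(const aUpdatedAggregate: TUser): TCQRSResult;
    // Delete/DeleteAll act on the data previously selected via Select*
    function Delete: TCQRSResult;
    function DeleteAll: TCQRSResult;
    /// write all pending changes to the actual storage
    function Commit: TCQRSResult;
  end;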
For instance, to modify an existing record, you will chain the Select*, Update and Commit methods.
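Here is a hedged sketch of such a sequence - the logon name is illustrative, and a cmd: IDomUserCommand instance is assumed to have been resolved via IoC, as shown in the next sample:
if cmd.SelectByLogonName('jsmith')<>cqrsSuccess then          // first step
  raise EMyApplicationException.Create('Unknown user');
if cmd.Get(user)=cqrsSuccess then
begin
  user.EmailValidated := evValidated;                         // modify the aggregate
  if cmd.Update(user)<>cqrsSuccess then                       // second step: validate
    raise EMyApplicationException.CreateFmt('Invalid data: %s',[cmd.GetLastErrorInfo]);
  // nothing is written to the database until Commit is actually called
  if cmd.Commit<>cqrsSuccess then
    raise EMyApplicationException.CreateFmt('Commit error: %s',[cmd.GetLastErrorInfo]);
end;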
If the logon name is unknown, an error will be raised at the first step. If the updated modification transmitted at the second step is invalid (i.e. you forgot to fill a mandatory field, or a value which should be unique, like a serial number, appears to exist already), then another error will be reported. But even after a successful Update, nothing will be stored in the database. Why? Because in most use cases, you will probably need to synchronize several operations: for instance, you may have to send an email, or call a third-party service, and write the new data only if everything was right. As such, you will need a two-phase write operation: first, you prepare and validate your data on each involved service, then, once everyone has given its green light, you eventually launch the process, which is, in the case of a persistence layer, calling Commit. In a real application, an unexpected low-level error may happen during the Commit phase - e.g. a network failure, a concurrency issue, or a problem between a chair and a keyboard - but it will not be likely to happen often. The dual-phase commit will ensure that most errors will be identified during the first phase, using our ORM's Filtering and Validating abilities.
Of course, if you want to run the IDomUserCommand.Add method, no prior IDomUserQuery.Select* call is mandatory. But for the Update and Delete or DeleteAll commands, you will need first to define the data extent you will work on, by a previous call to Select*.
To use those CQRS interfaces, you could use IoC as usual:
var cmd: IDomUserCommand;
    user: TUser;
    i: integer;
    itext: RawUTF8;
...
  aServer.Services.Resolve(IDomUserCommand,cmd);
  user := TUser.Create;
  try
    for i := 1 to MAX do
    begin
      UInt32ToUtf8(i,itext);
      user.LogonName := ' '+itext;
      user.EmailValidated := evValidated;
      user.Name.Last := 'Last'+itext;
      user.Name.First := 'First'+itext;
      user.Address.Street1 := 'Street '+itext;
      user.Address.Country.Alpha2 := 'fr';
      user.Phone1 := itext;
      if cmd.Add(user)<>cqrsSuccess then
        raise EMyApplicationException.CreateFmt('Invalid data: %s',[cmd.GetLastErrorInfo]);
    end;
    // here nothing is actually written to the database
    if cmd.Commit<>cqrsSuccess then
      raise EMyApplicationException.CreateFmt('Commit error: %s',[cmd.GetLastErrorInfo]);
    // here everything has been written to the database
  finally
    user.Free;
  end;
This dual-phase commit appears to be a clean way of implementing the Unit Of Work pattern. Under the hood, when used with our ORM - as we will now explain - Unit Of Work will be expressed as an I*Command service, uncoupled from the persistence layer it runs on.
24.4.3.5.4. Automated Repository using the ORM
As you may have noticed, we have just defined the interface types we needed. That is, we have the contract of our persistence services, but no actual implementation of it. As such, those interface definitions are useless on their own. Luckily for us, the mORMotDDD.pas unit offers an easy way to implement them using Object-Relational Mapping, with minimal coding.
24.4.3.5.4.1. DDD / ORM mapping
First we will need to map our domain object (i.e. our TUser instance and its properties) into a TSQLRecord. We may do it by hand, but there is a handier way. Just run the following in the context of your application:
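This is probably a single call to the class method mentioned below - the exact signature (here an open array of aggregate classes, with the default destination file) is an assumption:
TDDDRepositoryRestFactory.ComputeSQLRecord([TUser]);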
This class procedure will create a ddsqlrecord.inc file in the executable folder, containing the needed field definitions, with one TSQLRecord type corresponding to each hierarchy level of the original TPersistent definition. Nested fields will be defined as a single column in the TSQLRecord, e.g. Address.Country.Iso will be flattened as an Address_Country property.
So if we follow the class hierarchy, we will have:
CQRS Class Hierarchy Mapping for ORM and DDD Entities
Which will be defined as such in the ddsqlrecord.inc generated content:
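A hedged excerpt of what this generated type may look like, with flattened column names - only a few properties are shown, and the exact field types are assumptions:
type
  TSQLRecordUser = class(TSQLRecord)
  protected
    fLogonName: RawUTF8;
    fEmailValidated: TDomUserEmailValidation;
    fName_First: RawUTF8;
    fName_Last: RawUTF8;
  published
    property LogonName: RawUTF8 read fLogonName write fLogonName;
    property EmailValidated: TDomUserEmailValidation
      read fEmailValidated write fEmailValidated;
    property Name_First: RawUTF8 read fName_First write fName_First;
    property Name_Last: RawUTF8 read fName_Last write fName_Last;
  end;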
You may wonder why we will introduce a separate level of classes, between the DDD Aggregates and the database engine. Why not directly persist the Domain objects (as most DDD implementations do)?
In fact, our approach has several benefits:
Most of the time, simple mapping will be done automatically: once you have called TDDDRepositoryRestFactory.ComputeSQLRecord, there is very little additional coding to be done;
But you still have access to the full mapping process, not using attributes (which may sound convenient, but are polluting the DDD classes definition), but at Persistence Service method level;
You could persist the same DDD classes as Value Objects, Entities or Aggregates, depending on the use context, by a custom mapping over a dedicated Persistence Service - your domain objects are uncoupled from their use context - remember that the same Value Object may become an Aggregate, or an Entity, depending on the context: why define again and again the same classes? just reuse the same tuned types via Composition;
No need to inherit your DDD classes from a parent Entity class, or pollute it with an ID field (as most DDD implementations do);
TSQLRecord allows to be truly persistent agnostic: you may do the storage on a regular RDBMS engine, on a NoSQL database, or in memory, at runtime, without touching your DDD objects;
Practice did show that introducing ORM concepts at DDD class level is a wrong idea: just think about how the ID field may break your modelization, since the same object may be a Value Object in one context (so without any ID), but an Entity or an Aggregate in another context (so an ID is probably needed there) - it does indeed break the Persistence Ignorance pattern, and tends to produce an anemic domain model, i.e. CRUD operations in disguise;
TSQLRecord classes give you direct access to how your data will actually be stored: most ORMs, when dealing with complex classes (like our Domain objects), tend to hide the mapping complexity, and therefore make it difficult to debug and tune the storage itself: which object field is mapped to which column? which tables are involved and joined for the queries? - whereas TSQLRecord makes it clear how data will actually be stored: you may consider the TSQLRecord properties as a map of the SQL storage columns, or as the document stored in a NoSQL engine - database reuse and tuning will definitely be easier, when the TSQLRecord type definition shows you e.g. where the indexes should be created;
You are not tied to using TSQLRecord: you can easily define a mORMotDDD.pas CQRS repository service fully abstracted from mORMot's ORM, e.g. using existing tuned SQL statements, or any other means of storage;
Also consider that you are able to easily stub or mock the CQRS persistence service - see Stubs and mocks - whereas a direct ORM-oriented implementation would force you to create fake databases.
If you worry about the performance of adding such a layer, you may be confident it won't be a bottleneck: the CQRS mapping shares the same code as the framework ORM for RTTI and marshalling. The mapping process is just a fast loop over the properties, using cached RTTI, and assigning all content by reference, avoiding most memory allocations or content transformation.
24.4.3.5.4.2. Define the Factory
Since the generated TSQLRecordUser type follows known conventions, the mORMotDDD.pas unit is able to do almost all the persistence work in an automated way, by inheriting from two classes:
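A hedged sketch of those two classes - the factory and the repository implementation - reconstructed from the descriptions below; the exact signatures of the inherited constructor and of AddFilterOrValidate are assumptions:
type
  TInfraRepoUserFactory = class(TDDDRepositoryRestFactory)
  public
    constructor Create(aRest: TSQLRest;
      aOwner: TDDDRepositoryRestManager=nil); reintroduce;
  end;

  TInfraRepoUser = class(TDDDRepositoryRestCommand,IDomUserCommand,IDomUserQuery)
  public
    function SelectByLogonName(const aLogonName: RawUTF8): TCQRSResult;
    function SelectByEmailValidation(aValidationState: TDomUserEmailValidation): TCQRSResult;
    function SelectByLastName(const aName: TLastName; aStartWith: boolean): TCQRSResult;
    function Get(out aAggregate: TUser): TCQRSResult;
    function GetAll(out aAggregates: TUserObjArray): TCQRSResult;
    function GetNext(out aAggregate: TUser): TCQRSResult;
    function Add(const aAggregate: TUser): TCQRSResult;
    function Update(const aUpdatedAggregate: TUser): TCQRSResult;
    function HowManyValidatedEmail: integer;
  end;

constructor TInfraRepoUserFactory.Create(aRest: TSQLRest;
  aOwner: TDDDRepositoryRestManager);
begin
  inherited Create(IDomUserCommand,TInfraRepoUser,TUser,aRest,TSQLRecordUser,aOwner);
  AddFilterOrValidate(['*'],TSynFilterTrim.Create);          // trim all text fields
  AddFilterOrValidate(['LogonName'],TSynValidateNonVoidText.Create);
end;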
As you can see, the main point of this constructor is to supply the right parameters to the inherited TDDDRepositoryRestFactory.Create:
We would like to implement a IDomUserCommand contract - and, by the way, implement also its parent IDomUserQuery interface;
The actual implementation class will be TInfraRepoUser - which will be defined just after;
The Aggregate/Entity class is a TUser kind of object;
The associated TSQLRest server will be the one supplied to this class;
The ORM class, defining the actual SQL table or NoSQL collection which will store the data, is TSQLRecordUser;
An optional TDDDRepositoryRestManager instance may be supplied as owner of this factory - but it is not used in most cases.
The AddFilterOrValidate() method allows to set some Filtering and Validating expectations at DDD level. Those rules will be applied before Commit will take place, without any use of the ORM rules. In the above code, TSynFilterTrim will remove any space from all text fields of the TUser instance, and TSynValidateNonVoidText will ensure that the TUser.LogonName field will not be '' - after space trimming. You may consider those rules as the SQL constraints you may be used to. But since they will be defined at DDD level, they will apply on any database back-end, even if it does not support any constraint - e.g. if it is a NoSQL engine, or a third-party persistence service you do not have the hand on.
Note that InjectResolver() should be called before ServiceDefine(), otherwise the IoC won't take place as expected, and the TInfraRepoUserFactory class will be nil.
The CQRS services should be defined as sicClientDriven - and not as sicSingle or sicShared, since their lifetime is expected to be synchronized by the consumer side, i.e. the interface variable use on the client side.
On the client side, defining IDomUserCommand is enough to be able to use both IDomUserCommand and IDomUserQuery services, but on the server side you will have to explicitly define both interfaces, otherwise the Client/Server contracts won't match and you will not be able to use IDomUserQuery from the client side.
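On the server side, the registration may therefore look like this hedged sketch - aRestServer: TSQLRestServer is an assumption for your actual REST server instance, and the exact method names follow our reading of the framework units:
aRestServer.Services.InjectResolver(
  [TInfraRepoUserFactory.Create(aRestServer)],true);
aRestServer.ServiceDefine(TInfraRepoUser,
  [IDomUserCommand,IDomUserQuery],sicClientDriven);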
Note that we defined the TInfraRepoUser class as implementing both interfaces we need, via = class(...,IDomUserCommand,IDomUserQuery). We need both types to be explicit in the class type definition, otherwise IoC - i.e. aServer.Services.Resolve() calls - won't work for both.
As you can see, some methods appear to be missing. There is no Commit, nor Delete - which are required by IDomUserCommand. But in fact, those commands are so generic that they are already implemented for you in TDDDRepositoryRestCommand!
What we need now is to implement those methods, using the internal protected ORM*() methods inherited from this parent class:
function TInfraRepoUser.SelectByLogonName(const aLogonName: RawUTF8): TCQRSResult;
begin
  result := ORMSelectOne('LogonName=?',[aLogonName],(aLogonName=''));
end;

function TInfraRepoUser.SelectByEmailValidation(aValidationState: TDomUserEmailValidation): TCQRSResult;
begin
  result := ORMSelectAll('EmailValidated=?',[ord(aValidationState)]);
end;

function TInfraRepoUser.SelectByLastName(const aName: TLastName; aStartWith: boolean): TCQRSResult;
begin
  if aStartWith then
    result := ORMSelectAll('Name_Last LIKE ?',[aName+'%'],(aName='')) else
    result := ORMSelectAll('Name_Last=?',[aName],(aName=''));
end;

function TInfraRepoUser.Get(out aAggregate: TUser): TCQRSResult;
begin
  result := ORMGetAggregate(aAggregate);
end;

function TInfraRepoUser.GetAll(out aAggregates: TUserObjArray): TCQRSResult;
begin
  result := ORMGetAllAggregates(aAggregates);
end;

function TInfraRepoUser.GetNext(out aAggregate: TUser): TCQRSResult;
begin
  result := ORMGetNextAggregate(aAggregate);
end;

function TInfraRepoUser.Add(const aAggregate: TUser): TCQRSResult;
begin
  result := ORMAdd(aAggregate);
end;

function TInfraRepoUser.Update(const aUpdatedAggregate: TUser): TCQRSResult;
begin
  result := ORMUpdate(aUpdatedAggregate);
end;

function TInfraRepoUser.HowManyValidatedEmail: integer;
begin
  if ORMSelectCount('EmailValidated=%',[ord(evValidated)],[],result)<>cqrsSuccess then
    result := 0;
end;
Almost everything is already defined at TDDDRepositoryRestCommand level. Our TInfraRepoUser class, implementing a full CQRS service, fully abstracted from the ORM, is implemented by a few internal ORM*() method calls.
All the data access via the TSQLRecordUser REST persistence layer, with any Filtering and Validating defined rule, is also incorporated in TDDDRepositoryRestCommand. The conversion to/from TUser properties has been optimized, so that fields will be moved by reference, with no memory allocation nor content modification, for best performance and data safety. The type mapping specified by TInfraRepoUserFactory.Create is enough to make the whole process as automated as possible.
In fact, our TInfraRepoUser class is just a thin wrapper forcing use of strong typing in its method parameters (i.e. using TUser/TUserObjArray whereas the ORM*() methods are more relaxed about actual typing), and ensuring that the ORM specificities are followed as expected, e.g. a search against the TUser.Name.Last DDD field will use the TSQLRecordUser.Name_Last ORM column, with the proper LIKE operator.
Internally, TDDDRepositoryRestCommand.ORMPrepareForCommit will call all DDD and ORM TSynFilter and TSynValidate rules, as previously defined. It sounds indeed like a real advantage not to wait until the database layer is reached, to have those constraints verified. The sooner an error is notified, the better - especially in a complex SOA system.
DDD's DTO may also be defined as record, and directly serialized as JSON via text-based serialization. Don't be afraid of writing some translation layers between TSQLRecord and DTO records or, more generally, between your Application layer and your Presentation layer. It will be very fast, on the server side. If your service interfaces are cleaner, do not hesitate.
But defining DTO types, just for uncoupling, may become time consuming. If you start writing a lot of wrapping code, forget about it, and expose your Domain Value Objects or even your Entities, as stated above. Or automate the wrapper coding, using RTTI and code generators. You have to weigh the PROs and the CONs, like always... And never forget to write proper unit tests for this marshalling code, since it may induce some unexpected issues.
If you expect your DDD's objects to be schema-less or to have an evolving structure (e.g. for DTO), depending on each context, you may benefit from not using a fixed type like class or record, but use the TDocVariant custom variant type. This kind of variant will be serialized as JSON, and allow late-binding access to its properties (for object documents) or items (for array documents). In the context of interface-based services, using the per-reference option at creation (i.e. the _ObjFast() _ArrFast() _JsonFast() _JsonFmtFast() functions) does make sense, in order to spare the server resources.
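For instance, a schema-less DTO may be created and consumed as in this short sketch - the property names and values are illustrative:
var dto: variant;
begin
  dto := _ObjFast(['name','John','score',10]); // object document, serialized as JSON
  dto.score := dto.score+1;                    // late-binding access to its properties
  // would be transmitted as {"name":"John","score":11} over the service
end;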
24.4.4. Defining services
In practice, mORMot's Client-Server architecture may be used as such:
Services via methods - see Client-Server services via methods - can be used to publish methods corresponding to your aggregate roots defined as TSQLRecord. This will make it pretty RESTful compatible.
Services via interfaces - see Client-Server services via interfaces - can be used to publish all your processes. Dedicated factories can be used on both Client and Server side, to define your repositories and/or domain operations.
Client-Server services via methods may be preferred if you expect your service to be consumed in a truly RESTful way. But since in DDD you should better protect your Domain via a dedicated Adapter layer, such compatibility should be an implementation smell. In practice, Client-Server services via interfaces will offer better integration and automation of its process, e.g. parameter type validation (with JSON marshalling), session handling, interface-level multi-threading and security abilities, logging, ability to be emulated via Stubs and mocks, and - last but not least - Publish-subscribe for events.
24.4.5. Event-Driven Design
Event-Driven could be implemented in mORMot in at least two ways:
Via asynchronous interface callbacks - see Events as Callbacks below;
Via an Event Oriented Persistence relying on real-time synchronization - see Event Sourcing via Event Oriented Databases below.
Both ways have their own benefits and drawbacks, and you may pick the one which matches your particular use case. The first may be easier to implement and more versatile to use, but the second will work with off-line periods, and keep a persistent history of the state changes.
24.4.5.1. Events as Callbacks
DDD's Events could easily be implemented as Asynchronous callbacks, when an interface callback is defined as Service Methods Parameters. In this case, the interface type will define the various DDD events, ready to be notified and propagated in real-time across the whole system.
An application layer may provide a specific callback to the domain, which will push the notification as a regular Delphi call, but in fact transmitted via WebSockets from the corresponding Domain Service to the right application layer. The current implementation relies on WebSockets for remote access, but other protocols may be available in the future, since interface parameters callbacks may be implemented by any actual transmission class.
No need to encapsulate your events within a dedicated message class (as most Event-Driven implementations require), or pollute your Domain code to follow a fixed protocol expectations: just run a notification method corresponding to the event, and you are done - all subscribers will be notified.
No need to put a Message bus, or a centralized system, in production. Using callbacks, you will let your outer layers (e.g. Application or Presentation layers) be cleanly notified by the Domain Services, without any waste of resources, and without a potential bottleneck. Each node of your system will communicate directly with its subscriber, from a pure interface method call, as if it was a local process. See Publish-subscribe for events for implementation details.
In practice, the callbacks may be propagated from the Domain layer to the Application or Presentation layers, which may also have their own callback definitions, using not Domain objects, but their own DTOs. Marshalling an event will be as easy as writing a class implementing an I*Callback interface as defined in the Domain, translating its parameters into the DTO types defined for the outer Application or Presentation services.
On the server side, you may even define the callbacks in the very same process, without the WebSockets overhead, but calling directly the Domain services, whose interface type will be defined at Domain level, but the class type implemented at Infrastructure level:
The application layer may be able to run directly the Domain code in its own service/daemon, calling the actual implementation at Infrastructure level, with a single straight WebSockets transmission - the DDD's Adapters types being pure Delphi classes, running in process, with no overhead.
Or, you may gather Domain Services in some specific stand-alone daemons, which may be able to cache the events, and/or centralize some process - as a benefit, it may help those services be truly stateless, so the Application Layer may become redundant for better scaling.
Using interface values and calling their methods is a natural way of writing callbacks in Delphi code, very close to the VCL/RAD Events you may be used to, but with the benefit of the abstraction of Interfaces, especially SOLID design principles. If your need is to react in real time to some change of the system, they are probably the preferred way.
24.4.5.2. Event Sourcing via Event Oriented Databases
Another popular DDD Events implementation pattern is to define some kind of event persistence, which will be used as Event Sourcing. Here, we won't rely on explicit messages to transmit the events (as we just proposed via asynchronous interface callbacks), but we will use some state storage in an Event Oriented Persistence, then let subscribers be notified for each state change.
In this pattern, there is no explicit kind of Event defined. The state of the Domain is stored somewhere, then any change of state should be notified to whoever has an interest in it. Obviously, one potentially easy implementation may be via Real-time synchronization, as proposed by the framework.
The Domain services - see e.g. Services or if you Store your Entities in CQRS Repositories - may modify a dedicated TSQLRecord table, which will contain only a small part of the state of the model. For instance, its TSQLRecord fields definition may store only a few evolving values, like the latest order placed, or the price of an item, or the connection state of a peripheral. The main point is to restrict the data stored to its minimum, e.g. this evolving value and the name (or ID) of the object.
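A hedged sketch of such a minimal state table - here tracking the latest temperature per sensor - may look as follows; the field names are illustrative, and the published TRecordVersion field is what enables the framework real-time synchronization:
type
  TSQLRecordState = class(TSQLRecord)
  protected
    fSensorName: RawUTF8;
    fTemperature: double;
    fVersion: TRecordVersion;
  published
    property SensorName: RawUTF8 read fSensorName write fSensorName;
    property Temperature: double read fTemperature write fTemperature;
    /// the monotonic version field used by master/slave replication
    property Version: TRecordVersion read fVersion write fVersion;
  end;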
Thanks to the framework Real-time synchronization, any client process or service, via its own Slave copy of this TSQLRecordState storage, will be notified asynchronously. This notification will reflect each change of state, and will let the consumer react as expected. One OnNotify event is available, to track each individual change of state, as specified as parameter to TSQLRestServer.RecordVersionSynchronizeSlaveStart.
When using Events as Callbacks, you may miss some events: if the consumer service is off-line, there won't be any event notified. It may be as expected, but may be a huge issue in some cases. On the other hand, the Event Oriented Persistence model will allow the consumers to be safely off-line at some time. Each ORM Slave will have its own copy of the data, then will be able to retrieve all the missed changes of state, when it goes on-line.
This implementation pattern is in fact the base of any true Event Sourcing process. Following this DDD pattern, each node of the system should store the data it needs. The system nodes won't ask for a given piece of information (e.g. "What is the current temperature?"), but will be notified of each temperature change, then store the value, and be able to propagate any incoming events with almost no dependency. The main benefit is that you could add some node to the system, without any prior knowledge of what is already there. Such Events-driven Architecture (EDA) or Domain Event Driven Service Oriented Architecture (D-EDA) may be complex to maintain and debug, once they reach a given size. For instance, some unexpected Event Cascade may happen, when you get a sequence of events triggering other events: you may induce an infinite rebound in the whole system. As a consequence, a "pure Event Driven" system will probably be a wrong idea. Event Sourcing may be introduced for some part of your Domain, where it does make better sense. See http://martinfowler.com/eaaDev/EventCollaboration.html for more material.
As a side benefit, scaling of the whole system may be increased by this pattern. Each Event State storage may be seen as a safe cache of the system state, in the bounded context of a given set of values. When your business logic wonders about this particular state, it may ask this dedicated service, relieving the main database. You may even consider storing the whole state history in a dedicated Audit Trail for change tracking storage, without impacting the whole system.
Physically, it involves a common n-Tier representation splitting the classical Logic Tier into two layers, i.e. Application layer and Domain Model layer. At logical level, DDD will try to uncouple the Domain Model layer from other layers, so the code itself will rely on interfaces and dependency injection to let the core Domain focus on the business logic, not on implementation details (e.g. persistence or communication).
The RESTful SOA components of our Synopse mORMot framework can therefore define such an Architecture:
Clean Domain-Oriented Architecture of mORMot
As we already stated, the main point of this Clean Architecture is to control coupling, and isolate the Domain core from the outer layers. In Delphi, unit dependencies (as displayed e.g. by our SynProject tool) will be a good testimony of proper objects uncoupling: in the units defining your domain, you may split it between Domain Model and Domain Services (the 2nd using the first, and not vice-versa), and you should never have any dependency to a particular DB unit, just to the framework's core units, i.e. SynCommons.pas and mORMot.pas. Interfaces in practice: dependency injection, stubs and mocks - via Client-Server services via interfaces or at ORM initialization level - will ensure that your code is uncoupled from any low-level technical dependency. It will also allow proper testing of your application workflows, e.g. stubbing the database if necessary.
In fact, since a Service-Oriented Architecture (SOA) tends to ensure that services comprise unassociated, loosely coupled units of functionality that have no calls to each other embedded in them, we may define two levels of services, implemented by two interface factories, using their own hosting and communication:
One set of services at Application layer, to define the uncoupled contracts available from Client applications;
One set of services at Domain Model layer, which will allow all involved domains to communicate with each other, without exposing it to the remote clients.
Therefore, those layers could be also implemented as such:
Alternate Domain-Oriented Architecture of mORMot
In order to provide the best scaling of the server side, cache can be easily implemented at every level, and hosting can be tuned in order to provide the best response time possible: one central server, several dedicated servers for application, domain and persistence layers...
Due to the SOLID design of mORMot - see SOLID design principles - you can use as many Client-Server services layers as needed in the same architecture (i.e. a Server can be a Client of other processes), in order to fit your project needs, and let it evolve from the simplest architecture to a full scalable Domain-Driven design.
25. Testing and logging
Adopt a mORMot
Since we covered most architectural and technical aspects of the framework, it is time to put the last missing bricks in the building: testing and logging.
25.1. Automated testing
You know that testing is (almost) everything if you want to avoid regression problems in your application.
How can you be confident that any change made to your software code won't create any error in other part of the software?
Automated unit testing is a good candidate for avoiding any serious regression.
And even better, test-driven coding can be encouraged:
Write a void implementation of a feature, that is code the interface with no implementation;
Write a test code;
Launch the test - it must fail;
Implement the feature;
Launch the test - it must pass;
Add some features, and repeat all previous tests every time you add a new feature.
It could sound like a waste of time, but such coding improves your code quality a lot and, at least, it helps you write and optimize every implemented feature.
The framework has been implemented using this approach, and provides all the tools needed to write tests. In addition to what other Delphi frameworks offer (e.g. DUnit / DUnitX), the SynTests.pas unit is very much integrated with other elements of the framework (like logging), is cross-platform and cross-compiler, and provides a complete stubbing / mocking mechanism to cover Interfaces in practice: dependency injection, stubs and mocks.
25.1.1. Involved classes in Unitary testing
The SynTests.pas unit defines two classes (both inheriting from TSynTest), implementing a complete Unitary testing mechanism similar to DUnit, with less code overhead, and direct interface with the framework units and requirements (UTF-8 ready, code compilation from Delphi 6 up to the latest available Delphi version and FPC, no external dependency).
The following diagram defines this class hierarchy:
TSynTest classes hierarchy
The main usable class types are:
TSynTestCase, which is a class implementing a test case: individual tests are written in the published methods of this class;
TSynTests, which is used to run a suit of test cases, as defined with the previous class.
In order to define tests, some TSynTestCase children must be defined, and will be launched by a TSynTests instance to perform all the tests. A text report is created on the current console, providing statistics and Pass/Fail.
25.1.2. First steps in testing
Here are the functions we want to test:
function Add(A,B: double): Double; overload;
begin
  result := A+B;
end;

function Add(A,B: integer): integer; overload;
begin
  result := A+B;
end;

function Multiply(A,B: double): Double; overload;
begin
  result := A*B;
end;

function Multiply(A,B: integer): integer; overload;
begin
  result := A*B;
end;
So we create three classes: one for the whole test suite, one for testing addition, and one for testing multiplication:
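A hedged sketch of those three classes, reconstructed from the class and method names appearing in the samples and console output below:
type
  TTestNumbersAdding = class(TSynTestCase)
  published
    procedure TestIntegerAdd;
    procedure TestDoubleAdd;
  end;

  TTestNumbersMultiplying = class(TSynTestCase)
  published
    procedure TestIntegerMultiply;
    procedure TestDoubleMultiply;
  end;

  TTestSuit = class(TSynTests)
  published
    procedure MyTestSuit;
  end;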
The trick is to create published methods, each containing some tests to process.
Here is how one of these test methods is implemented (I let you guess the others):
procedure TTestNumbersAdding.TestDoubleAdd;
var A,B: double;
    i: integer;
begin
  for i := 1 to 1000 do
  begin
    A := Random;
    B := Random;
    CheckSame(A+B,Add(A,B));
  end;
end;
The CheckSame() call is necessary because of floating-point precision problems: we can't trust the plain = operator (i.e. Check(A+B=Add(A,B)) would fail because of rounding issues).
And here is the test case implementation:
procedure TTestSuit.MyTestSuit;
begin
AddCase([TTestNumbersAdding,TTestNumbersMultiplying]);
end;
And the main program (this .dpr is expected to be available as a console program):
with TTestSuit.Create do
try
ToConsole := @Output; // so we will see something on screen
Run;
readln;
finally
Free;
end;
Just run this program, and you'll get:
Suit
------
1. My test suit
1.1. Numbers adding:
- Test integer add: 1,000 assertions passed 92us
- Test double add: 1,000 assertions passed 125us
Total failed: 0 / 2,000 - Numbers adding PASSED 360us
1.2. Numbers multiplying:
- Test integer multiply: 1,000 assertions passed 73us
- Test double multiply: 1,000 assertions passed 117us
Total failed: 0 / 2,000 - Numbers multiplying PASSED 324us
Generated with: Delphi 7 compiler
Time elapsed for all tests: 1.51ms
Tests performed at 25/03/2014 10:59:33
Total assertions failed for all test suits: 0 / 4,000
All tests passed successfully.
You can see that all text on screen was created by "UnCamelCasing" the method names (thanks to our good old Camel), and that the test suit just follows the order defined when registering the classes. Each method has its own timing, which is pretty convenient to track performance regressions.
This test program has been uploaded in the SQLite3\Sample\07 - SynTest folder of the Source Code Repository.
25.1.3. Framework test coverage
The SAD # DI-2.2.2 defines all classes released with the framework source code, which covers all core aspects of the framework. Global testing coverage is good, excellent for core components (more than 25,000,000 individual checks are performed for revision 1.18), but there are still some User-Interface related tests to be written.
Before any release all unitary regression tests are performed with the following compilers:
Delphi 7, with and without our Enhanced Run Time Library;
Delphi 2007;
Delphi 2010 (we assume that if it works with Delphi 2010, it will work with Delphi 2009, with the exception of generic compilation);
Delphi XE4;
Delphi XE7;
Delphi XE8;
Delphi 10.3 Rio;
Delphi 10.4 Sidney;
CrossKylix 3.0;
FPC 3.x - the 3.2 fixes branch is preferred.
Target platforms are Win32 and Win64 for Delphi and FPC, plus Linux 32/64 for FPC and CrossKylix.
Then all sample source code (including the Main Demo and SynDBExplorer sophisticated tools) are compiled, and user-level testing is performed against those applications.
You can find in the compil.bat and compilpil.bat files of our source code repository how incremental builds and tests are performed.
25.2. Enhanced logging
A logging mechanism is integrated with the cross-cutting features of the framework. It includes exception stack traces and such, just like MadExcept, using the .map file content to retrieve debugging information from the source code.
Here are some of its features:
Logging with a set of levels, not only a level scale;
Fast, low execution overhead;
Can load .map file symbols to be displayed in logging (i.e. source code file name and line numbers are logged instead of a hexadecimal value);
Compression of .map into binary .mab (900 KB -> 70 KB);
Inclusion of the .map/.mab into the .exe, with a very small size increase;
Exception logging (Delphi or low-level exceptions) with unit names and line numbers;
Optional stack trace with units and line numbers;
Methods or procedure recursive tracing, with Enter and auto-Leave (using a fake interface instance);
High resolution time stamps, for customer-side profiling of the application execution;
Optional rotation when main log reaches a specified size, with compression of the rotated logs;
Integrated log archival (in .zip or any other format, including our .synlz);
Optional colored echo to a console window, for interactive debugging;
Fast log viewer tool available, including thread filtering and customer-side execution profiling;
Optional remote logging via HTTP - the log viewer can be used as server;
Optional events transmission to a UDP syslog server.
25.2.1. Setup logging
Logging is defined mainly by a per-class approach. You usually define your logging expectations by using a TSynLog class, and setting its Family property. Note that it is perfectly feasible to use your own TSynLog class instance, with its own TSynLog family settings, injected at the constructor level; but in mORMot, we usually use the per-class approach, via TSynLog, TSQLLog, SynDBLog and SQLite3Log - see below.
For sample code (and the associated log viewer tool), see "11 - Exception logging" folder in "Sqlite3\Samples".
In short, you can add logging to your program, just by using the TSynLog class, as such:
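For instance - here Stats is assumed to be any object exposing a DebugMessage text field, as described just below:

  TSynLog.Add.Log(sllInfo, Stats.DebugMessage);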
This line will log the Stats.DebugMessage text, with a sllInfo notification level. See the description of all Log() overloaded methods of the ISynLog interface, to find out how your project can easily log events.
First of all, you need to define your logging setup via code:
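A minimal sketch of such a setup could be - with LOG_VERBOSE simply enabling all levels at once:

  with TSynLog.Family do
  begin
    Level := LOG_VERBOSE; // log everything
    // Level := [sllError, sllLastError, sllException, sllExceptionOS]; // or only errors
  end;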
The main setting here is TSynLog.Family.Level := ... which defines which levels are to be logged. That is, if sllInfo is part of TSynLog.Family.Level, any TSynLog.Add.Log(sllInfo,...) command will log the corresponding content - otherwise, it will be a no-operation. LOG_VERBOSE is a constant setting all levels at once.
You have several debugging levels available, and even 4 custom types:
sllTrace will log low-level step by step debugging information;
sllWarning will log unexpected values (not an error);
sllError will log errors;
sllEnter will log every method start;
sllLeave will log every method quit;
sllLastError will log the GetLastError OS message;
sllException will log all exceptions raised - available since Windows XP;
sllExceptionOS will log all OS low-level exceptions (EDivByZero, ERangeError, EAccessViolation...);
sllMemory will log memory statistics;
sllStackTrace will log the caller's stack trace (it is by default part of TSynLogFamily.LevelStackTrace, like sllError, sllException, sllExceptionOS, sllLastError and sllFail);
sllFail was defined for the TSynTestsLogged.Failed method, and can be used to log some customer-side assertions (may be notifications, not errors);
sllSQL is dedicated to trace the SQL statements;
sllCache should be used to trace any internal caching mechanism (it is used for instance by our SQL statement caching);
sllResult could trace the SQL results, JSON encoded;
sllDB is dedicated to trace low-level database engine features;
sllHTTP could be used to trace HTTP process;
sllClient/sllServer could be used to trace some Client or Server process;
sllServiceCall/sllServiceReturn to trace some remote service or library;
sllUserAuth to trace user authentication (e.g. for individual requests);
sllCustom1..sllCustom4 items can be used for any purpose by your programs;
sllNewRun will be written when a process re-opens a rotated log.
Logging does not use a single TSynLogInfo level directly, but the following set type:
/// used to define a logging level
// - i.e. a combination of none or several logging event
// - e.g. use LOG_VERBOSE constant to log all events
TSynLogInfos = set of TSynLogInfo;
Most logging tools in the wild use a level scale, i.e. with a hierarchy, excluding the lower levels when one is selected.
Our logging classes use a set, and not directly a particular level, so you are able to select which exact events are worth recording. In practice, we found this pattern to make a lot of sense and to be much more efficient for support.
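For instance, you may record only errors and SQL-related events, whatever their rank would be on a classical level scale - a sketch with an arbitrary selection:

  TSynLog.Family.Level := [sllError, sllException, sllExceptionOS, sllSQL, sllDB];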
25.2.2. Call trace
The logging mechanism can be used to trace recursive calls. It can use an interface-based mechanism to log when you enter and leave any method:
procedure TMyDB.SQLExecute(const SQL: RawUTF8);
var ILog: ISynLog;
begin
ILog := TSynLogDB.Enter(self,'SQLExecute');
// do some stuff
ILog.Log(sllInfo,'SQL=%',[SQL]);
end; // when you leave the method, it will write the corresponding event to the log
It will be logged as such:
20110325 19325801 + MyDBUnit.TMyDB(004E11F4).SQLExecute
20110325 19325801 info SQL=SELECT * FROM Table;
20110325 19325801 -
Note that by default you have human-readable time and date written to the log, but it is also possible to replace this timing with high-resolution timestamps. With this, you'll be able to profile your application with data coming from the customer side, on its real computer. Via the Enter method (and its auto-Leave feature), you have all information needed for this.
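Switching to those high-resolution timestamps is a single property change on the family - a minimal sketch:

  TSynLog.Family.HighResolutionTimestamp := true; // high-resolution time stamps instead of date/time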
25.2.3. Including symbol definitions
In the above logging content, the method name is set in the code (as 'SQLExecute'). But if the logger class is able to find a .map file associated to the .exe, the logging mechanism is able to read this symbol information, and write the exact line number of the event.
By default, the .map file information is not generated by the compiler. To force its creation, you must ensure the {$D+} compiler directive is set in every unit (which is the case by default, unless you set {$D-} in the source), and the "Detailed Map File" option selected in the Project > Options > Linker page of the Delphi IDE.
In the following log entries, you'll see both high-resolution time stamp, and the entering and leaving of a TTestCompression.TestLog method traced with no additional code (with accurate line numbers, extracted from the .map content):
There is already a dedicated TSynLogFile class able to read the .log file, and recognize its content.
The first time the .map file is read, a .mab file is created, and will contain all symbol information needed. You can send the .mab file with the .exe to your client, or even embed its content to the .exe (see the Map2Mab.dpr sample file located in the Samples\11 - Exception logging\ folder).
This .mab file is very optimized: for instance, a 927,984 bytes .map compresses into a 71,943 bytes .mab file.
25.2.4. Exception handling
Of course, this logging mechanism is able to intercept raised exceptions, including the worst (e.g. EAccessViolation), and log them automatically in the log file.
The TSynLogInfo logging level makes a difference between high-level Delphi exceptions (sllException) and lowest-level OS exceptions (sllExceptionOS) like EAccessViolation.
You can specify some Exception class to be ignored, by adding them to Family.ExceptionIgnore internal list. It could make sense to add this setting, if your code often triggers some non-breaking exceptions, e.g. with StrToInt():
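For instance - assuming EConvertError is the exception class raised by your StrToInt() calls on invalid input:

  // never intercept nor log EConvertError, e.g. as raised by StrToInt()
  TSynLog.Family.ExceptionIgnore.Add(EConvertError);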
If your Delphi code executes some .Net managed code (e.g. exposed via some COM wrapper components), the unit is able to recognize most un-handled .Net exceptions, and log them with their original C# class name (for instance, EOleSysError 80004003 will be recorded as a much more user-friendly "[.NET/CLR unhandled ArgumentNullException]" message).
You can set the following global variable to assign a customized callback, and be able to customize the logging content associated to any exception:
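A sketch of its declaration could be as follows - see SynLog.pas for the actual definition of this TSynLogExceptionToStrCustom variable and of its TSynLogExceptionToStr callback type:

  var
    /// allows to customize the text logged for any intercepted exception
    // - when assigned, it is called instead of the default formatting
    TSynLogExceptionToStrCustom: TSynLogExceptionToStr = nil;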
An easier possibility is to inherit your custom exception class from ESynException, and override its unique virtual method:
/// generic parent class of all custom Exception types of this unit
ESynException = class(Exception)
public
  /// can be used to customize how the exception is logged
  // - this default implementation will call the DefaultSynLogExceptionToStr()
  // callback or TSynLogExceptionToStrCustom, if defined
  // - override this method to provide a custom logging content
  // - should return TRUE if Context.EAddr and Stack trace is not to be
  // written (i.e. as for any TSynLogExceptionToStr callback)
  function CustomLog(WR: TTextWriter; const Context: TSynLogExceptionContext): boolean; virtual;
end;
25.2.5. Serialization
The TSQLLog class (using the enhanced RTTI methods defined in mORMot.pas unit) is even able to serialize TSQLRecord, TPersistent, TList and TCollection instances as JSON, or any other class instance, after a call to TJSONSerializer.RegisterCustomSerializer.
0000000000001172 + 000E9F67 SynSelfTests.TestPeopleProc (784)
000000000000171B info {"TSQLRecordPeople(00AB92E0)":{"ID":16,"FirstName":"Louis","LastName":"Croivébaton","Data":"","YearOfBirth":1754,"YearOfDeath":1793}}
0000000000001731 -
25.2.6. Multi-threaded applications
You can define several log files per process, and even a per-thread log file, if needed (it could be sometimes handy, for instance on a server running the same logic in parallel in several threads).
The logging settings are made at the logging class level. Each logging class (inheriting from TSynLog) has its own TSynLogFamily instance, which is to be used to customize the logging class level. Then you can have several instances of the individual TSynLog classes, each class sharing the settings of the TSynLogFamily.
You can therefore initialize the "family" settings before using logging, like in this code which will force to log all levels (LOG_VERBOSE), and create a per-thread log file, and write the .log content not in the .exe folder, but in a custom directory:
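A sketch of such a setup could be - here C:\Logs\ is an arbitrary target folder:

  with TSynLog.Family do
  begin
    Level := LOG_VERBOSE;               // log all levels
    PerThreadLog := ptOneFilePerThread; // one log file per running thread
    DestinationPath := 'C:\Logs\';      // write the .log files in a custom directory
  end;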
If you specify PerThreadLog := ptIdentifiedInOnFile for the family, a new column will be added for each log row, with the corresponding ThreadID - the supplied LogView tool will handle it as expected. This can be very useful for a multi-threaded server process, e.g. as implemented with mORMot's Client-Server classes - see Client-Server process.
25.2.7. Log to the console
For debugging purposes, it could be very handy to output the logging content to a console window. It enables interactive debugging of a Client-Server process, for instance: you can interact with the Client, then look in real time at the server console window, and inspect which requests are processed, without the need to open the log file.
The EchoToConsole property enables you to select which events are to be echoed on the console (perhaps you expect only errors to appear, for instance).
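For instance, to echo only error-related events - a sketch with an arbitrary selection:

  TSQLLog.Family.EchoToConsole := [sllError, sllLastError, sllException, sllExceptionOS];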
Depending on the events, colors will be used to write the corresponding information. Errors will be displayed as light red, for instance.
Note that this echoing process slows down the logging process a lot, since it is currently implemented in a blocking mode, and writing to the console under Windows is much slower than writing to a file. This feature is therefore disabled by default, and is not to be enabled on a production server, but only to make interactive debugging easier.
25.2.8. Remote logging
By default, TSynLog writes its activity to a local file, and/or to the console. The log file can be transmitted later on (once compressed) to support, for further review and debugging. But sometimes, it may be handy to see the logging in real-time, on a remote computer.
You can enable such remote monitoring for a given TSynLog class, by adding the mORMotHTTPClient.pas unit in your use clause, then calling the following constructor:
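A sketch of such a call could be as follows - the CreateForRemoteLogging constructor name and its exact parameter list are assumptions here, so please check its declaration in mORMotHTTPClient.pas before use:

  // hypothetical signature - verify against mORMotHTTPClient.pas
  TSQLHttpClient.CreateForRemoteLogging('192.168.1.15', SQLite3Log, 8091, 'LogService');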
This command will let any SQLite3Log event be sent to a remote server running at http://192.168.1.15:8091/LogService/RemoteLog - in fact this should be a mORMot server, but may be any REST server, able to answer to a PUT command sent to this URI.
A TSQLHttpClient instance will be created, and will be managed by the SQLite3Log instance. It will be released when the application is closed, or when the SQLite3Log.Family.EchoRemoteStop method is called.
In practice, our Log View tool - see below - is able to run as a compatible remote server. Execute the tool, set the expected Server Root name ('LogService' by default), and the expected Server Port (8091 by default), then click on the "Server Launch" button. The Log View tool will now display all incoming events in real time, let you search into their content, and allow saving all received events into a regular .log or .synlz file, for further archiving and study. Note that since the Log View tool will run a http.sys based server - see High-performance http.sys server - you may have to run the tool once with administrator rights, to register the Server Root / Server Port combination for binding.
The implementation of this remote logging has been tuned on both client and server side. On the client side, log events are gathered and sent in a dedicated background thread: if a lot of events are generated, they will be transferred in chunks of several rows, to minimize resource and bandwidth use. On the server side, incoming events are stored in memory, and indexed on the fly, with a periodic refresh rate of 500 ms: even a very active client logger will let the Log View tool remain responsive and efficient. Thanks to the nature of the http.sys based server, several Server Root URIs can be accessed in parallel by several Log View tool instances, on the same HTTP port: this will ease the IT policy of your network, since a single forwarded port will be able to handle several incoming connections.
See the "RemoteLoggingTest.dpr" sample from "11 - Exception logging", in conjunction with the LogView.dpr tool available in the same folder, for a running example of remote logging.
Note that our cross-platform clients - see Cross-Platform clients - are able to log to a remote server, with the same exact format as used by our TSynLog class.
25.2.9. Log to third-party libraries
Our TSynLog class was designed to write its information to a file, and optionally to the console or a remote log server (as we just saw). In fact, TSynLog is extensively used by the mORMot framework to provide various levels of details on what happens behind the scene: it is great for debugging purposes.
It may be convenient to let TSynLog work with any third party logging applications such as CodeSite or SmartInspect, or any proprietary solution. As a result, mORMot logs can be mixed with existing application logs.
You can define the TSynLogFamily.EchoCustom property to specify a simple event to be triggered for each log operation: the application can then decide to log to a third party logger application.
Note that there is also the TSynLogFamily.NoFile property, which allows completely disabling the built-in file logging mechanism.
For instance, you may write:
procedure TMyClass.Echo(Sender: TTextWriter; Level: TSynLogInfo; const Text: RawUTF8);
begin
  if Level in LOG_STACKTRACE then // filter only errors
    writeln(Text); // could be any third-party logger
end;
...
with TSQLLog.Family do
begin
  Level := LOG_VERBOSE;
  // EchoToConsole := LOG_VERBOSE; // log all events to the console
  EchoCustom := aMyClass.Echo; // register third-party logger
  NoFile := true; // ensure TSynLog won't use the default log file
end;
A process similar to TSynLogFile.ProcessOneLine() could then parse the incoming Text value, if needed.
25.2.10. Automated log archival
Log archives can be created with the following settings:
with TSynLogDB.Family do
begin
  (...)
  OnArchive := EventArchiveZip;
  ArchivePath := '\\Remote\WKS2302\Archive\Logs'; // or any path
end;
The OnArchive event can be set to several functions, taking a timeout delay from the ArchiveAfterDays property value:
nil is the default value, and won't do anything: the .log will remain on disk until they will be deleted by hand;
EventArchiveSynLZ to compress the .log file into a proprietary SynLZ format: resulting file name will be located in ArchivePath\log\YYYYMM\*.log.synlz, and the command-line UnSynLz.exe tool (calling FileUnSynLZ function of SynCommons.pas unit) can be used to uncompress it in to plain .log file;
SynZip.EventArchiveZip will archive the .log files into ArchivePath\log\YYYYMM.zip files, grouping the files of each month together.
.synlz files are less compressed, but created much faster than .zip files. However, .zip files are more standard, and for a regular application, compression speed won't be an issue.
25.2.11. Log file rotation
Log file rotation is controlled by two TSynLogFamily properties:
RotateFileSizeKB will define the maximum size of the main uncompressed log file;
RotateFileCount will define how many files are kept on disk - note that rotated files are compressed using SynLZ, so compression will be very fast;
If both values are > 0, the log file will have a fixed name, without any time-stamp within.
Log file rotation is as easy as:
with TSQLLog.Family do
begin
  Level := LOG_VERBOSE;
  RotateFileCount := 5;        // will maintain a set of up to 5 files
  RotateFileSizeKB := 20*1024; // rotate by 20 MB logs
end;
Such a logging definition will create those files on disk, e.g. for the TestSQL3.dpr regression tests:
TestSQL3.log which will be the latest (current) log file, uncompressed;
TestSQL3.1.synlz to TestSQL3.4.synlz will be the 4 latest log files, after compression. Our Log Viewer tool - see below - is able to uncompress those .synlz files directly.
Note that as soon as you activate file rotation, the PerThreadLog = ptOneFilePerThread and HighResolutionTimestamp properties will be ignored, since both features expect a single file to exist per TSynLog class.
As an alternative, or in addition to this by-size rotation pattern, you could specify a fixed time of the day to perform the rotation. For instance, the following will perform automatic rotation of the log files, whatever their size, at 23:00 each evening:
with TSQLLog.Family do
begin
  Level := LOG_VERBOSE;
  RotateFileCount := 5;        // will maintain a set of up to 5 files
  RotateFileDailyAtHour := 23; // rotate at 11:00 PM
end;
If the default behavior - which is to compress all rotated files into .synlz format, and delete the older files - does not fit your needs, you can set a custom event to the TSynLogFamily.OnRotate property, which will take care of the file rotation process.
25.2.12. Integration within tests
Logging is integrated within the unit testing classes, so that any failure will create an entry in the log with the source line and stack trace.
25.2.13. Log viewer
Since the log files tend to be huge (for instance, if you set the logging for our unitary tests, the 17,000,000 test cases do create a huge log file of about 550 MB), a log viewer was definitely needed.
The log-viewer application is available as source code in the "Samples" folder, in the "11 - Exception logging" sub-folder.
25.2.13.1. Open log files
You can run it with a specified log file on the command line, or use the "Browse" button to browse for a file. That is, you can associate this tool with your .log files, for instance, and you'll open it just by double-clicking on such files.
Note that if the file is not in our TSynLog format, it will still be opened as plain text. You'll be able to browse its content and search within, but all the nice features of our logging won't be available, of course.
It is worth saying that the viewer was designed to be fast. In fact, it takes no time to open any log file. For instance, a 390 MB log file is opened in less than one second on my laptop. Under Windows Seven, it takes more time to display the "Open file" dialog window than reading and indexing the 390 MB content. It uses internally memory mapped files and optimized data structures to access to the data as fast as possible - see TSynLogFile class.
25.2.13.2. Log browser
The screen is divided into three main spaces:
On the left side, the panel of commands;
On the right side, the log events list;
On the middle, an optional list of method calls, and another list of threads (not shown by default).
The command panel allows to Browse your disk for a .log file. This button is a toggle of an optional Drive / Directory / File panel on the leftmost side of the tool. When a .log / .synlz / .txt file is selected, its content is immediately displayed. You can specify a directory name as a parameter of the tool (e.g. in a .lnk desktop link), which will let the viewer be opened in "Browse" mode, starting with the specified folder.
A button gives access to the global Stats about its content (customer-side hardware and software running configuration, general numbers about the log), and can even resolve a source code line number and unit name from a hexadecimal address available in the log, by browsing for the corresponding .map file (this could be handy if you did not deliver the .map content within your main executable - which is recommended).
Just below the "Browse" button, there is an edit field available, with a ? button. Enter any text within this edit field, and it will be searched within the log events list. Search is case-insensitive, and was designed to be fast. Clicking on the ? button (or pressing the F3 key) allows you to repeat the last search.
In the very same left panel, you can see all existing events, each with its own color and an associated check-box. Note that only events actually encountered in the .log file appear in this list, so its content will change between log files. By selecting / un-selecting a check-box, the corresponding events will be instantly shown or hidden in the right-side list of events. You can right-click on the events check-box list to select a predefined set of events.
The right colored event list follows the events appended to the log, by time order. When you click on an event, its full line content is displayed at the bottom on the screen, in a memo.
Having all SQL / NoSQL and Client-Server events traced in the log is definitively a huge benefit for customer support and bug tracking.
25.2.13.3. Customer-side profiler
One distinctive feature of the TSynLog logging class is that it is able to map methods or functions entering/leaving (using the Enter method), and trace this into the logs. The corresponding timing is also written within the "Leave" event, and allows application profiling from the customer side. Most of the time, profiling an application is done during testing, with a test environment and database. But this does not, and never will, reproduce the exact nature of customer use: for instance, the hardware is not the same (network, memory, CPU), nor the software (Operating System version, [anti-]virus installed)... By enabling customer-side method profiling, the log will contain all relevant information. Those events are named "Enter" / "Leave" in the command panel check-box list, and written as + and - in the right-sided event list.
The "Methods profiler" options allow to display the middle optional method calls list. Several sort order are available: by name (alphabetical sort), by occurrence (in running order, i.e. in the same order than in the event log), by time (the full time corresponding to this method, i.e. the time written within the "Leave" event), and by proper time (i.e. excluding all time spent in the nested methods).
The "Merge method calls" check-box allows to regroup all identical method calls, according to their name. In fact, most methods are not called once, but multiple time. And this is the accumulated time spent in the method which is the main argument for code profiling.
I'm quite sure that the first time you use this profiling feature on a huge existing application, you'll find out some bottlenecks you would never have thought about before.
25.2.13.4. Per-thread inspection
If the TSynLog family has specified PerThreadLog := ptIdentifiedInOnFile property, a new column will be added for each log row, with the corresponding ThreadID of the logged action.
The log-viewer application will identify this column, and show a "Thread" group below the left-side commands. It will allow you to go to the next thread, or toggle the optional Thread view list. By checking / un-checking any thread of this list, you are able to inspect the execution log of a given thread very easily. A right-click on this thread list will display a pop-up menu, allowing you to select all threads or no thread in one command.
25.2.13.5. Server for remote logging
As was stated above, Remote logging can use our Log View tool as server and real-time viewer for any remote client, either using TSynLog, or any cross-platform client - see Cross-Platform clients.
Using remote logging is especially useful from mobile applications (written with Delphi / FireMonkey or with Smart Mobile Studio / AJAX). Our viewer tool allows efficient live debugging of such platforms.
25.2.14. Framework log integration
The framework makes an extensive use of the logging features implemented in the SynLog.pas unit - see Enhanced logging.
In its current implementation, the framework is able to log on request:
Any exceptions triggered during process, via sllException and sllExceptionOS levels;
Client and server RESTful URL methods via sllClient and sllServer levels;
SQL executed statements in the SQLite3 engine via the sllSQL level;
JSON results when retrieved from the SQLite3 engine via the sllResult level;
Main errors triggered during process via sllError level;
Security User authentication and session management via sllUserAuth;
Some additional low-level information via sllDebug, sllWarning and sllInfo levels.
Those levels are available via the TSQLLog class, inheriting from TSynLog, as defined in mORMot.pas.
Three main TSynLogClass global variables are defined in order to use the same logging family for the whole framework. Since mORMot units are decoupled (e.g. Database or ORM/SOA), several variables have been defined, as such:
SynDBLog for all SynDB* units, i.e. all generic database code;
SQLite3Log for all mORMot* units, i.e. all ORM related code;
You can set your own class type to SynDBLog / SynSQLite3Log if you expect separated logging.
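For instance - a sketch with a hypothetical TSynDBLogToOwnFile class:

  type
    TSynDBLogToOwnFile = class(TSynLog); // hypothetical dedicated logging class
  ...
    SynDBLog := TSynDBLogToOwnFile; // SynDB* units will now log to their own family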
As a result, if you execute the following statement at the beginning of TestSQL3.dpr, regression tests will produce some logging, resulting in more than 740 MB of log file content:
TSynLogTestLog := TSQLLog; // share the same log file with whole mORMot
with TSQLLog.Family do
begin
  Level := LOG_VERBOSE;
  HighResolutionTimestamp := true;
  PerThreadLog := ptIdentifiedInOnFile;
end;
Creating so much log content won't increase the processing time much. On a recent laptop, the whole regression test process spends only about 2 seconds writing the additional logging, which is mostly limited by the hard disk writing speed.
If logging is turned off, there is no speed penalty noticeable.
Logging could be very handy for interactive debugging of a client application. Since our TSynLog / TSQLLog classes feature an optional output to a console, you are able to see the incoming requests in real time - see for instance how the 14 - Interface based services\Project14ServerHttp.pas sample is initialized:
begin
  // define the log level
  with TSQLLog.Family do
  begin
    Level := LOG_VERBOSE;
    EchoToConsole := LOG_VERBOSE; // log all events to the console
  end;
  // create a Data Model
  aModel := TSQLModel.Create([],ROOT_NAME);
  (...)
Of course, this interactive console refresh slows down the process a lot. It is therefore to be enabled only for debugging purposes, not in production.
26. Source code
Adopt a mORMot
26.1. License
26.1.1. Three Licenses Model
The framework source code is licensed under a disjunctive three-license giving the user the choice of one of the three following sets of free software/open source licensing terms:
Mozilla Public License, version 1.1 or later (MPL);
GNU General Public License, version 2.0 or later (GPL);
GNU Lesser General Public License, version 2.1 or later (LGPL), with linking exception of the FPC modified LGPL.
FPC modified LGPL is the Library GNU General Public License with the following modification: As a special exception of the LGPL, the copyright holders of this library give you permission to link this library with independent modules to produce an executable, regardless of the license terms of these independent modules, and to copy and distribute the resulting executable under terms of your choice, provided that you also meet, for each linked independent module, the terms and conditions of the license of that module. An independent module is a module which is not derived from or based on this library. If you modify this library, you may extend this exception to your version of the library, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
This allows the use of the framework code in a wide variety of software projects, while still maintaining intellectual rights on library code.
26.1.2. Publish modifications and credit for the library
In all cases, any modification made to this source code should be published by any means (e.g. a download link), even in the case of MPL. If you need any additional feature, use the forums and we may introduce a patch into the main framework trunk.
You do not have to pay any fee for using our MPL/GPL/LGPL libraries.
But please do not forget to put somewhere in your credit window or documentation, a link to https://synopse.info if you use any of the units published under this tri-license.
For instance, if you select the MPL license, here are the requirements:
You have to publish any modified unit (e.g. SynTaskDialog.pas) in a public web site (e.g. http://SoftwareCompany.com/MPL), with a description of applied modifications, and no removal of the original license header in source code;
You make appear some notice available in the program (About box, documentation, online help), stating e.g. This software uses some third-party code of the Synopse mORMot framework (C) 2022 Arnaud Bouchez - https://synopse.info - under Mozilla Public License 1.1; modified source code is available at http://SoftwareCompany.com/MPL.
26.1.3. Derivate Open Source works
If you want to include part of the framework source code in your own open-source project, you may publish it with a comment similar to this one (as included in the great DelphiWebScript project by Eric Grange - http://code.google.com/p/dwscript ):
{
Will serve static content and DWS dynamic content via http.sys
kernel mode high-performance HTTP server (available since XP SP2).
See http://blog.synopse.info/post/2011/03/11/HTTP-server-using-fast-http.sys-kernel-mode-server
WARNING: you need to first register the server URI and port to the http.sys stack.
That is, run the application at least once as administrator.
Sample based on official mORMot's sample
"SQLite3\Samples\09 - HttpApi web server\HttpApiServer.dpr"
Synopse mORMot framework. Copyright (C) 2022 Arnaud Bouchez
Synopse Informatique - https://synopse.info
Original tri-license: MPL 1.1/GPL 2.0/LGPL 2.1
You will need at least the following files from mORMot framework
to be available in your project path:
- SynCommons.pas
- Synopse.inc
- SynLZ.pas
- SynZip.pas
- SynCrtSock.pas
- SynWinSock.pas
https://synopse.info/fossil/wiki?name=Downloads
}
Note that this documentation is under GPL license only, as stated in this document front page.
26.1.4. Commercial licenses
Even though our libraries are Open Source with permissive licenses, some users want to obtain a license anyway. For instance, you may want to hold a tangible legal document as evidence that you have the legal right to use and distribute your software containing our library code, or, more likely, your legal department tells you that you have to purchase a license.
If you feel like you really have to purchase a license for our libraries, Synopse, the company that employs the architect and principal developer of the library, will sell you one. Please contact us directly for a contract proposal.
26.2. Availability
As a true Open Source project, all source code of the framework is available, and latest version can be retrieved from our online repository at https://synopse.info/fossil
The source has been commented following the scheme used by our SynProject documentation tool. That is, all the interface definitions of the units have special comments, which were extracted and then incorporated into this Software Architecture Design (SAD) document, in the following pages.
26.2.1. Obtaining the Source Code
Each official release of the framework is available in a dedicated SynopseSQLite3.zip archive from the official https://synopse.info web site, but you may want to use the latest version available.
As an alternative, you can manually obtain a .zip archive containing a snapshot of the latest version of the whole source code tree directly from this repository.
Log in as anonymous. The password is shown on screen. Just click on the "Fill out captcha" button then on the "Login" button. The reason for requiring this login is to prevent spiders from walking the entire website, downloading ZIP archives of every historical version, and thereby soaking up all our bandwidth.
Click on the Timeline or Leaves link at the top of the page. Preferred way is Leaves which will give you the latest available version.
Select a version of the source code you want to download: a version is identified by an hexadecimal link (e.g. 6b684fb2). Note that you must successfully log in as "anonymous" in steps 1-3 above in order to see the link to the detailed version information.
Finally, click on the "Zip Archive" link, available at the end of the "Overview" header, right ahead to the "Other Links" title. This link will build a .zip archive of the complete source code and download it to your browser.
26.2.2. Expected compilation platform
The framework source code tree will compile and is tested for the following platforms:
Delphi 6 up to the latest Delphi compiler and IDE version, with FreePascal Compiler (FPC) 3.x and Lazarus support;
Server side on Windows 32-bit and 64-bit platforms (FPC or Delphi XE2 and up expected when targeting Win64);
Linux 32-bit and 64-bit platform for servers using the FPC 3.2 fixes branch - now stable and tested in production since years (especially Debian/Ubuntu on x86_64);
VCL client on Win32/Win64 - GUI may be compiled optionally with third-party non Open-Source TMS Components, instead of default VCL components - see http://www.tmssoftware.com/site/tmspack.asp
If you want to compile the mORMot units into packages, to avoid an obfuscated [DCC Error] E2201 Need imported data reference ($G) to access 'VarCopyProc' error at compilation, you should define the USEPACKAGES conditional in your project's options. Open SynCommons.inc for a description of this conditional, and of all other definitions global to all mORMot units - see SynCommons unit. To avoid the related E1025 Unsupported language feature: 'Object' compilation error, you should probably also set "Generate DCUs only" in the project's options "C/C++ output file generator".
The framework source code implementation and design have tried to be as cross-platform and cross-compiler as possible, since the beginning. It is a lot of work to maintain compatibility across so many tools and platforms, but we think it is always worth it - especially if you try not to depend on Delphi only, which has shown some backward compatibility issues during its lifetime.
For HTML5 and Mobile clients, our main platform is Smart Mobile Studio, which is a great combination of ease of use, a powerful SmartPascal dialect, small applications (much smaller than FMX), with potential packaging as native iOS or Android applications (via PhoneGap).
The latest versions of the FreePascal Compiler, together with its great Lazarus IDE, are now very stable and easy to work with. We don't support CodeTyphon, since we found some licensing issue with some part of it (e.g. the Orca GUI library origin is doubtful). So we recommend using fpcupdeluxe - see below - which is maintained by Alfred, a mORMot contributor. It is amazing to build the whole set of compilers and IDE, with a lot of components, for several platforms (this is a cross-platform project), just from the sources. I like Lazarus stability and speed much more than Delphi (did you ever try to browse and debug included {$I ...} files in the Delphi IDE? With Lazarus, it is painless), even if the compiler is slower than Delphi's, and if the debugger is less integrated and even more unstable than Delphi's under Windows (yes, it is possible!). At least, it works, and the Lazarus IDE is small and efficient. Official Linux support is available for mORMot servers, with full features in the FPC 3.2 branch - we have used it in production with 64-bit Linux for years.
26.2.3. SQLite3 static linking for Delphi and FPC
Preliminary note: if you retrieved the source code from https://github.com/synopse/mORMot you will have all the needed .obj/.o static files available in the expected folders. Just ignore this chapter.
In order to maintain our https://synopse.info/fossil/timeline source code repository at a decent size, we excluded the sqlite3.obj/.o storage from it, but provide the full source code of the SQLite3 engine in a custom sqlite3.c file, ready to be compiled with all conditionals defined as expected by SynSQLite3Static.pas. You need to get the official SQLite3 amalgamation file from https://www.sqlite.org/download.html and put its content into a SQLite3\amalgamation sub-folder, for proper compilation. Our custom sqlite3.c file will add the encryption feature to the engine. Also look into the SynSQLite3Static.pas comments to check whether any manual patch is needed for proper compilation of the amalgamation source.
Of course, you are not required to do the compilation: sqlite3.obj (for Delphi Win32) and sqlite3.o files (for Delphi Win64) are available for Delphi, as a separated download, from https://synopse.info/files/sqlite3obj.7z
For Delphi, please download the latest compiled version of these .obj/.o files from this link. You can also use the supplied c.bat and c64.bat files to compile from the original sqlite3.c file available in the repository, if you have the bcc32/bcc64 C command-line compiler(s) installed.
For Win32, the free version works and was used to create the .obj file, i.e. C++Builder Compiler (bcc compiler) free download - as available from Embarcadero web site.
For native Windows 64-bit applications (since Delphi XE2), a sqlite3.o static file is also available from the same archive. If you need an external dynamic .dll for Win64, since there is no official SQLite3 download for Win64 yet, you can use the one we supply at https://synopse.info/files/SQLite3-64.7z
For FPC, you need to download static .o files from https://synopse.info/files/sqlite3fpc.7z then uncompress the embedded static folder and its sub-folders at the mORMot root folder (i.e. where Synopse.inc and SynCommons.pas stay). If you retrieved the source code from our GitHub repository at https://github.com/synopse/mORMot you already got the static sub-folder as expected by the framework. Those static files have been patched to support optional encryption of the SQLite3 database file. Then enable the FPCSQLITE3STATIC conditional in your project, or directly modify Synopse.inc to include it, so that those .o files will be statically linked to the executable.
You could also compile the static libraries from the sqlite3.c source, to run with FPC - do not forget to enable the FPCSQLITE3STATIC conditional in this case also. Under Windows, ensure the MinGW compiler is installed, then execute c-fpcmingw.bat from the SQLite3 folder. It will create the sqlite3.o and sqlite3fts.o files, as expected by FPC. Under Linux, Use the c-fpcgcclin.sh bash script.
26.2.4. SpiderMonkey library
To enable JavaScript support in mORMot, we rely on our version of the SpiderMonkey library. See Scripting Engine.
26.2.5.3. SQLite3 folder
File   Description
TestSQL3.dpr   Main testing program of the Synopse mORMot framework
TestSQL3Register.dpr   Run as administrator for TestSQL3 to use http.sys on Vista/Seven
c.bat   Batch file to compile sqlite3.c with the bcc32 command-line compiler
sqlite3.c   Source code of the SQLite3 embedded Database engine
26.2.5.4. CrossPlatform folder
In a CrossPlatform folder, some source code is available, to be used when creating mORMot clients for compilers or platforms not supported by the main branch:
File   Description
SynCrossPlatform.inc   Includes cross-platform and cross-compiler conditionals
26.3. Installation
To set up mORMot for Delphi 6 up to the latest Delphi version, you have two ways: either download the framework archives, or clone our GitHub repository at https://github.com/synopse/mORMot
26.3.1. Manual download
Download and uncompress the framework archives, including all sub-folders, into a local directory of your computer (for instance, D:\Dev\mORMot).
26.3.2. Clone from GitHub
Alternatively, you can clone the repository with git:
d:
cd Dev
git clone https://github.com/synopse/mORMot.git
It will create a d:\Dev\mORMot local folder, which can later be re-synchronized with the official sources. An advantage of cloning our GitHub repository is that it already contains the binaries needed for static linking (SQLite3 and FPC specific), in a single step.
Just take care that if you downloaded some other library from Synopse (e.g. from https://github.com/synopse/SynPDF or https://github.com/synopse/dmustache), you should preferably use only the main https://github.com/synopse/mORMot repository, which already contains those other projects, to avoid any version confusion. We have seen a lot of installation problems reported in our forum due to source code file collisions between several repositories not at the same revision.
26.3.3. Setup the Delphi IDE
To let your IDE know about mORMot source code, add the following paths to your Delphi IDE (in Tools/Environment/Library or Tools/Options/Language/Delphi Options/Library menu depending on your Delphi version):
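Assuming the framework was installed in D:\Dev\mORMot - as in the FPC settings shown later - the library path would typically include at least:

  D:\Dev\mORMot
  D:\Dev\mORMot\SQLite3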
For any cross-platform client, do not forget to include the D:\Dev\mORMot\CrossPlatform to the Delphi or FreePascal IDE paths of the corresponding targets. For Smart Mobile Studio, execute CopySynCrossPlatformUnits.bat to set the needed units in the IDE repository.
Note that before Delphi 2006, you will need to download and install FastMM4 heap memory manager - from http://sourceforge.net/projects/fastmm or from the D:\Dev\mORMot\RTL7 sub folder of our repository - for some samples to work (without it, mORMot units will work, but will be slower). Starting with Delphi 2006, FastMM4 is already included within the system RTL, so you do not need to download it.
Open the TestSQL3.dpr program from the SQLite3 sub-folder. You should be able to compile it and run all regression tests on your computer. If you want to run the tests with the fast http.sys kernel-based HTTP server, you'll need to compile and run (as administrator) TestSQL3Register.dpr once before launching TestSQL3.dpr.
Then open the *.dpr files, as available in the SQLite3\Samples sub-folder. You should be able to compile all sample programs, including SynFile.dpr in the MainDemo folder.
26.4.1. Possible targets
You can use the FreePascal Compiler (FPC) to (cross-)compile the mORMot framework source code, targeting the following CPU and OS combinations:
i386-win32
x86_64-win64
i386-linux
x86_64-linux
i386-freebsd
x86_64-freebsd
i386-darwin
x86_64-darwin
arm-linux
aarch64-linux
32-bit and 64-bit Windows and Linux platforms are the main supported targets, used in production for years. Others may need some enhancements, and you are free to contribute! mORMot has been reported to work on a Raspberry Pi running Linux, thanks to FPC abilities - and with good performance and stability.
Linux is a premium target for cheap and efficient server hosting. Since mORMot has no dependency, installing a new mORMot server is as easy as copying its executable to a blank Linux host, then running it. No need to install any framework nor runtime. Even the SQLite3 engine will be statically linked on most platforms, as we provide up-to-date binaries in our repository. You could even use diverse operating systems (several Linux or Windows Server versions) in your mORMot server farm, with minimal system requirements, and easy updates.
For proper FPC compilation, ensure you have the following settings to your project:
Other unit files (-Fu): D:\Dev\mORMot;D:\Dev\mORMot\SQLite3;D:\Dev\mORMot\SQLite3\DDD\infra
Include files (-Fi): $(ProjOutDir);D:\Dev\mORMot;D:\Dev\mORMot\SQLite3
Replace D:\Dev\mORMot path by the absolute/relative folder where you did install the framework. In practice, a relative path (e.g. ..\..\mORMot) is preferred.
26.4.2. Setup your dedicated FPC / Lazarus environment with fpcupdeluxe
We currently use the FPC 3.2 fixes branch compiler, and the corresponding Lazarus IDE.
But since the FPC trunk may be unstable, we propose putting in place a stable development environment based on the FPC 3.2 fixes branch to work with your mORMot-based projects. It may ease support and debugging.
For this task, don't download an existing binary release of FPC / Lazarus, but use the fpcupdeluxe tool, as published at http://wiki.freepascal.org/fpcupdeluxe - it will allow you to build your environment directly from the sources, and install it in a dedicated folder. Several FPC / Lazarus installations, with dedicated revision numbers, may coexist on the same computer: just ensure you run Lazarus from the shortcut created by fpcupdeluxe.
Unpack it in a dedicated folder, and run its executable.
On the main screen, locate on the left the two versions listboxes. Select "3.2" for FPC version and "2.1.0" for Lazarus version.
Important note: if you want to cross-compile from Windows to other systems, e.g. install a Linux cross-compiler on Windows, ensure you installed the Win32 FPC compiler and Lazarus, not the Win64 version, which is known to have troubles with currency support;
Then build the FPC and Lazarus binaries directly from the latest sources, by clicking on "Install/update FPC+Laz".
Those branches are currently used for building our production projects, so are expected to be properly tested and supported. At the time of the writing of this documentation, our Lazarus IDE (on Linux) reports using:
FPC SVN 45643 (3.2.0)
Lazarus SVN 64940 (2.1.0).
One big advantage of fpcupdeluxe is that you can very easily install cross-compilers for the CPU / OS combinations enumerated at Possible targets. Just go to the "Cross" tab, then select the target systems, and click on "Install compiler". It may be needed to download the cross-compiler binaries (once): just select "Yes" when prompted.
If you don't want to define a given version, the current trunk should/could work, if it didn't include any regression at the time you get it - this is why we provide "supported" branches. If you want to use the FPC trunk, please modify line #262 in Synopse.inc to enable the FPC_PROVIDE_ATTR_TABLE conditional and support the latest trunk RTTI changes:
{$if not defined(VER3_0) and not defined(VER3_2) and not defined(VER2)}
  {$define FPC_PROVIDE_ATTR_TABLE} // to be defined since SVN 42356-42411
  // on compilation error in SynFPCTypInfo, undefine the above conditional
  // see https://lists.freepascal.org/pipermail/fpc-announce/2019-July/000612.html
{$ifend}
Sadly, there is no official conditional available to detect this RTTI change. You need to define this conditional globally.
26.4.3. Missing RTTI for interfaces in old FPC 2.6
Sadly, if you use a somewhat old revision of FPC, you may face a long-time unresolved FPC compiler-level restriction/issue, which did not supply the needed interface RTTI, available since Delphi 6 - see http://bugs.freepascal.org/view.php?id=26774 As a consequence, SOA, mock/stub and MVC framework features will not work directly with older FPC revisions.
You could upgrade to a more recent FPC - we encourage you to Setup your dedicated FPC / Lazarus environment with fpcupdeluxe - or use the workaround proposed here to compile such mORMot applications with the oldest FPC revisions. The trick is to use Delphi to generate one unit containing the needed information.
Ensure that the application will use all its needed interfaces: for instance, run all your regression tests, and/or use all its SOA/MVC features if you are not confident about your test coverage;
Just before the application exits, add a call to ComputeFPCInterfacesUnit() with the proper folders, e.g. at the very end of your .dpr code.
For instance, here is how TestSQL3.dpr has been modified:
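A sketch of such a modification, placed at the very end of the .dpr, could be as follows - the exact ComputeFPCInterfacesUnit() parameters (source folders to scan and destination unit file name) are assumptions here, so check its actual declaration in the framework source before use:

{$ifdef COMPUTEFPCINTERFACES}
  // hypothetical folder values - adjust to your own source layout
  ComputeFPCInterfacesUnit(['..\','..\SQLite3\'],'TestSQL3FPCInterfaces.pas');
{$endif}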
If you define the COMPUTEFPCINTERFACES conditional, the TestSQL3FPCInterfaces.pas unit will be generated.
Of course, for your own application, you may use absolute path names: here we used relative naming, via ..\, so that it will work on any development folder configuration.
26.4.4. Writing your project for FPC
If you want your application to compile with FPC, some little patterns should be followed.
In all your source code files, the easiest is to include the Synopse.inc mORMot file, which will define all compiler options and conditionals as expected. The uses clause of your .dpr should then begin as such:
uses
  {$ifdef FPC}
  // we may be on Kylix or upcoming Delphi for Linux
  {$ifdef Linux}
  // if you use threads
  cthreads,
  // widestring manager for Linux if needed !!
  // could also be put in another unit ... but doc states: as early as possible
  cwstring, // optional
  {$endif}
  {$endif}
In fact, these above lines have been added to SynDprUses.inc, so you may just write the following:
uses
  {$I SynDprUses.inc} // will enable FastMM4 prior to Delphi 2006, and enable FPC on linux
As a side benefit, you will be able to share the same .dpr with Delphi, and it will enable FastMM4 for older versions which do not include it as default heap manager.
For instance a minimal FPC project to run the regression tests may be:
program LinuxSynTestFPCLinuxi386;

{$I Synopse.inc}

{$APPTYPE CONSOLE}

uses
  {$I SynDprUses.inc}
  mORMotSelfTests;

begin
  SQLite3ConsoleTests;
end.
In your user code, ensure you do not directly link to the Windows unit, but rely on the cross-platform classes and functions as defined in SysUtils.pas, Classes.pas and SynCommons.pas. You could find in SynFPCTypInfo.pas and SynFPCLinux.pas some low-level functions dedicated to FPC and Linux compilation, to be used with legacy units - your new code should better rely on higher level functions and classes.
If you rely on mORMot classes and types, e.g. use RawUTF8 for all your string process in the business logic, and do not use Delphi-specific features (like generics, or new syntax sugar), it will be very easy to let your application compile with FPC.
26.4.5. Linux VM installation tips
Here are a few informal notes about getting a FPC/Lazarus virtual machine running XUbuntu on a Windows host. They are published as a general guideline, and we will not provide any reference procedure, nor support it. As stated in Setup your dedicated FPC / Lazarus environment with fpcupdeluxe, instead of using a virtual machine, you could just install the needed cross-compilers, then generate your Linux/BSD executables from your Windows Lazarus.
Download the latest .iso version published at http://xubuntu.org/ or any other place - we use XFCE since it is a very lightweight desktop, perfect to run Lazarus, and we selected an Ubuntu LTS revision (14.04 at the time of this writing), which will be the same used on Internet servers;
Create a new virtual machine (VM) in VirtualBox, with 1 or 2 CPUs, more than 512 MB of RAM (we use 777 MB), and an automatic-growing disk storage, with a maximal size of 15 GB; ensure that the disk storage is marked as SSD if your real host storage is a SSD;
Let the CDROM storage point to the .iso you downloaded;
Start the VM and install Linux locally, as usual - you may select to download the updated packages during the installation, for safety;
When the system restarts, if it asks for software updates, accept and wait for the update installation to finish - it is a good idea to have the latest version of the kernel and libraries before installing the VirtualBox drivers;
Restart your VM when asked to;
Under a Ubuntu/Debian terminal, write the following commands:
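The exact packages depend on your distribution; to be able to build the VirtualBox Guest Additions kernel modules on Ubuntu/Debian, something along these lines is usually needed:

sudo apt-get update
sudo apt-get install build-essential dkms linux-headers-$(uname -r)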
Restart the VM, then select "Insert Guest Additions CD image" from the VM "Devices" menu: a virtual CD will be mounted on your system and appear on your desktop;
Run the following command, according to your current user name and VirtualBox version:
sudo sh /media/...user.../VBOXADDITIONS_..../VBoxLinuxAdditions.run
Restart the VM, then add a permanent shared folder in the VM configuration, named Lib, and pointing to your local mORMot installation (e.g. d:\Dev\mORMot);
Create a void folder, e.g. in your home:
mkdir lib
Create a launcher for the following command, to mount the shared folder as expected:
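Assuming the shared folder was named Lib and the lib folder was created in your home directory as above, a typical mount command would be:

sudo mount -t vboxsf -o uid=$(id -u),gid=$(id -g) Lib ~/lib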
If you have issues during SVN retrieval, go to the development/fpc folder, then run the following before trying the fpcup_linux_x86 command again:
svn cleanup
svn update
If you followed the above steps, you should now have the expected Lazarus IDE and the corresponding FPC compiler. It is amazing to see the whole compiler + IDE being compiled from the official sources, for free, and in a few minutes.
26.5. CrossKylix support
26.5.1. What is Cross-Kylix?
The framework source code can also be cross-compiled under Delphi into a Linux executable, using CrossKylix. https://crosskylix.untergrund.net is a free toolkit to integrate the Borland Kylix (Delphi for Linux) compiler into the Delphi Windows IDE.
CrossKylix has indeed several known drawbacks:
It is a dead project, but an alive product. It still works!
You can not buy it any more. Kylix 3 was shipped with Delphi 7.
You need an actual Kylix CD (or an ISO image) to install it, since CrossKylix is just a wrapper around the official compiler, to let it run under Windows.
Visual applications (based on the CLX framework - the predecessor of FMX) may still compile, but should not be used. But for server applications, it is still a pretty viable solution.
The debugger and IDE is unusable. But thanks to our SynLog.pas, you can debug your applications, with a full stack trace in the log, in case of any exception.
We added CrossKylix support for several reasons:
We have used it for years, with great success, so we know it better than FPC.
It has still a better compiler than FPC, e.g. for the RTTI we need on interfaces, or even for executable size and memory use.
Its compilation is instant - whereas FPC is long to compile.
It supports FastMM4, which performs better than the FPC memory manager, from our tests.
Resulting executables, for mORMot purpose, are faster than FPC - timing based on the regression tests.
If the code works with Delphi 7, it will certainly work with Kylix (since it shares the same compiler and RTL), whereas FPC is compatible, but not the same. In particular, it does not suffer from limited RTTI or other FPC limitations. So it sounds safer to be used on production than FPC, even today.
There are not a lot of IFDEFs, and they are mostly in SynCommons.pas. Then there is a SynKylix.pas unit for several functions. User code will be the same as for Delphi and FPC.
There is a Linux compiler just released by Embarcadero with the latest Delphi, but an Enterprise license is required, so we currently skip its support, and focus on FPC...
Once you have installed CrossKylix, and set up its search path to the same as Delphi - see Delphi Installation, you should be able to compile your project for Linux, directly from your Delphi IDE. Then you need an actual Linux system to test it - please check the Linux VM installation tips.
A minimal console application which will compile for both Delphi and CrossKylix, running all our regression tests, may be:
program Test;

{$APPTYPE CONSOLE}

uses
  FastMM4, // optional - only for CrossKylix or Delphi < 2006
  mORMotSelfTests;

begin
  SQLite3ConsoleTests;
end.
Similar guidelines as for Writing your project for FPC do apply with CrossKylix. In particular, you should never use the Windows unit in your server code, but rely on the cross-platform classes and functions as defined in SysUtils.pas, Classes.pas and SynCommons.pas.
We did not succeed in getting a static SQLite3 library linked by the Kylix compiler. It complains about the .o format - it sounds as if its linker expects a gcc2 format (which is nowadays deprecated), and does not accept the gcc3 or gcc4 generated binaries. So you need to install sqlite3 as an external library on your Linux system.
On a 32-bit system, it takes just one line - depending on your distribution, here Ubuntu:
sudo apt-get install sqlite3
For a 64-bit system, you need to explicitly install the x86 32-bit version of SQlite3:
sudo apt-get install sqlite3:i386
or download and install manually packages for both modes:
You could try to get the latest .deb from https://launchpad.net/ubuntu/vivid/i386/libsqlite3-0 . If you want to download and install a .deb manually for x86, please install both i386 and amd64 revisions with the same exact version at once, otherwise dpkg will complain.
In case it helps, here are the static dependencies listed on a running 64-bit Ubuntu system, for a CrossKylix-compiled executable:
As you can see, there are very few dependencies - the same as for an FPC executable in fact, with the addition of the external libsqlite3.so.0, which is statically linked into the FPC version.
26.5.2. Running Kylix 32-bit executables on 64-bit Linux
For Ubuntu versions above 13.10, if you installed a 64-bit distribution, 32-bit executables - as generated by CrossKylix - may not be recognized by the system. Of course, we recommend using the FPC (cross-)compiler, and building your executable natively for the x86_64-linux target.
In order to install the 32-bit libraries needed by mORMot 32-bit executables compiled by Kylix on Linux, please execute:
If you want SynCrtSock.pas to be able to handle https:// on a 64-bit system - e.g. if you want to run the TestSQL3 regression tests which download some json reference file over https - you will also need to install libcurl (and OpenSSL) in 32-bit, as such:
sudo apt-get install libcurl3:i386
In case it helps, here are the static dependencies listed on a running 64-bit Ubuntu system, for an FPC 3.2 compiled executable:
There is almost no dependency: installing a mORMot server under Linux is just as simple as copying an executable on a minimal blank Linux server. You do not need any LAMP runtime, virtual machine, installing other services, or execution environment. Of course, you may better add a reverse proxy like nginx in front of your mORMot servers when connected on the Internet, but for a cloud-based solution, or a self-hosted office server, software requirements are pretty low.
26.6. Upgrading from a 1.17 revision
If you are upgrading from an older revision of the framework, your own source code should be updated.
For instance, some units were renamed, and some breaking changes were introduced by enhanced features. As a consequence, a direct update is not possible.
To properly upgrade to the latest revision:
1. Erase or rename your whole previous #\mORMot directory.
2. Download latest 1.18 revision files as stated just above.
3. Change your references to mORMot units (a typical resulting uses clause is sketched after this list):
Add in your uses clause SynTests.pas if you use testing features;
Add in your uses clause SynLog.pas if you use logging features;
Rename in your uses clauses any SQLite3Commons reference into mORMot.pas;
Rename in your uses clauses any SQLite3 reference into mORMotSQLite3.pas;
Rename in your uses clauses any other SQlite3* reference into mORMot*;
Add in one of your uses clause a reference to the SynSQLite3Static.pas unit (for Win32 or Linux).
4. Consult the units' headers about 1.18 for breaking changes, mainly:
Changed '¤' into '~' character for mORMoti18n.pas (formerly SQlite3i18n.pas) language files.
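For instance, a typical uses clause after step 3 could look like the following sketch (the exact unit set depends on which framework features your project actually uses):

uses
  SynCommons,
  SynLog,            // logging features, now split from SynCommons
  SynTests,          // testing features, now split from SynCommons
  SynSQLite3Static,  // statically linked SQLite3 engine (Win32 or Linux)
  mORMot,            // was SQLite3Commons
  mORMotSQLite3;     // was SQLite3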
Most of those changes will be easily identified at compile time. But a quick code review, and proper regression tests at application level, are worth considering.
Feel free to get support from our forum, if needed.
27. mORMot Framework source
The mORMot Framework makes use of the following units.
Unit dependencies in the "Lib\SynDBDataset" directory
28. SynFile application
Adopt a mORMot
This sample application is a simple database tool which stores text content and files into the database, in both clear and "safe" manner. Safe records are stored using AES/SHA 256-bit encryption. There is an Audit Trail table for tracking the changes made to the database.
This document will follow the application architecture and implementation, in order to introduce the reader to some main aspects of the Framework:
We hope this part of the Software Architecture Design (SAD) document will be a reliable guideline for using our framework in your own projects.
28.1. General architecture
According to the Multi-tier architecture, some units will define the three layers of the SynFile application:
Database Model
First, the database tables are defined as regular Delphi classes, like a true ORM framework. Classes are translated to database tables. Published properties of these classes are translated to table fields. No external configuration files to write - only Delphi code. Nice and easy. See FileTables.pas unit.
This unit is shared by both client and server sides, with a shared data model, i.e. a TSQLModel class instance, describing all ORM tables/classes.
It contains also internal event descriptions, and actions, which will be used to describe the software UI.
Business Logic
The server side is defined in a dedicated class, which implements an automated Audit Trail, and a service named "Event" to easily populate the Audit Trail from the Client side. See FileServer.pas unit.
The client side is defined in another class, which is able to communicate with the server, and fill/update/delete/add the database content playing with classes instances. It's also used to call the Audit Trail related service, and create the reports. See FileClient.pas unit.
Client-Server logic will be detailed in the next paragraph.
Presentation Layer
The main form of the Client is void, if you open its FileMain.dfm file. All the User Interface is created by the framework, dynamically from the database model and some constant values and enumeration types (thanks to Delphi RTTI) as defined in the FileTables.pas unit (the first one, which also defines the classes/tables).
Its main method is TMainForm.ActionClick, which will handle the actions, triggered when a button is pressed.
The reports use GDI+ for anti-aliased drawing, can be zoomed and saved as pdf or text files.
The last FileEdit.pas unit is just the form used for editing the data. It also performs the encryption of "safe memo" and "safe data" records, using our SynCrypto.pas unit. It will use AES-NI hardware instructions, if available, so it will be very fast, even for big content.
You'll discover how the ORM plays its role here: you change the data, just like changing any class instance properties.
It also uses our SynGdiPlus.pas unit to create thumbnails of any picture (emf+jpg+tif+gif+bmp) of data inserted in the database, and add a BLOB data field containing these thumbnails.
28.2. Database design
The FileTables.pas unit is implementing all TSQLRecord child classes, able to create the database tables, using the ORM aspect of the framework - see Object-Relational Mapping (ORM). The following class hierarchy was designed:
SynFile TSQLRecord classes hierarchy
Most common published properties (i.e. Name, Created, Modified, Picture, KeyWords) are taken from the TSQLFile abstract parent class. It is called "abstract", not in the current Delphi OOP sense, but as a class with no "real" database table associated. It was used to define the properties only once, without the need to write the private variables nor the getters/setters in the child classes. Only TSQLAuditTrail won't inherit from this parent class, because its purpose is not to contain user data, but just some tracking information.
The database itself will define TSQLAuditTrail, TSQLMemo, TSQLData, TSQLSafeMemo, and TSQLSafeData classes. They will be stored as AuditTrail, Memo, Data, SafeMemo and SafeData tables in the SQlite3 database (the table names are extracted from the class names, trimming the left 'TSQL' characters).
Sounds like a regular Delphi class, doesn't it? The only fact to be noticed is that it does not inherit from a TPersistent class, but from a TSQLRecord class, which is the parent object type to be used for our ORM. The TSQLRecordSigned class type just defines some Signature and SignatureTime additional properties, which will be used here for handling digital signing of records.
Here follows the Delphi code written, and each corresponding database field layout of each registered class:
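The full listing and field layout tables are not reproduced here; the following is only a minimal sketch of what such declarations typically look like, assuming the property names described above - the actual FileTables.pas source is more complete (e.g. it may use TCreateTime/TModTime instead of plain TTimeLog fields):

type
  // hedged sketch - not the verbatim FileTables.pas source
  TSQLFile = class(TSQLRecordSigned)
  protected
    fName: RawUTF8;
    fModified: TTimeLog;
    fCreated: TTimeLog;
    fPicture: TSQLRawBlob;
    fKeyWords: RawUTF8;
  published
    property Name: RawUTF8 read fName write fName;
    property Created: TTimeLog read fCreated write fCreated;
    property Modified: TTimeLog read fModified write fModified;
    property Picture: TSQLRawBlob read fPicture write fPicture;
    property KeyWords: RawUTF8 read fKeyWords write fKeyWords;
  end;

  TSQLMemo = class(TSQLFile)
  protected
    fContent: RawUTF8;
  published
    property Content: RawUTF8 read fContent write fContent;
  end;

  TSQLData = class(TSQLFile)
  protected
    fData: TSQLRawBlob;
  published
    property Data: TSQLRawBlob read fData write fData;
  end;

  TSQLSafeMemo = class(TSQLData);
  TSQLSafeData = class(TSQLData);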
You can see that TSQLSafeMemo and TSQLSafeData are just direct sub-classes of TSQLData, creating "SafeMemo" and "SafeData" tables with the exact same fields as the "Data" table. Since they were declared as class(TSQLData), they are new class types, so they will each get their own database table.
Then the latest class is not inheriting from TSQLFile, because it does not contain any user data, and is used only as a log of all actions performed using SynFile:
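As a hedged sketch (only the AssociatedRecord property and its TRecordReference type are named in this text; the other property names are illustrative assumptions):

type
  TSQLAuditTrail = class(TSQLRecord)
  protected
    fTime: TModTime;
    fStatus: TFileEvent;
    fStatusMessage: RawUTF8;
    fAssociatedRecord: TRecordReference;
  published
    property Time: TModTime read fTime write fTime;
    property Status: TFileEvent read fStatus write fStatus;
    property StatusMessage: RawUTF8 read fStatusMessage write fStatusMessage;
    property AssociatedRecord: TRecordReference read fAssociatedRecord write fAssociatedRecord;
  end;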
The AssociatedRecord property was defined as TRecordReference. This special type (mapped as an INTEGER field in the database) is able to define a "one to many" relationship with ANY other record of the database model.
If you want to create a "one to many" relationship with a particular table, you should define a property with the corresponding TSQLRecord sub-type (for instance, if you want to link to a particular SafeData row, define the property as AssociatedData: TSQLSafeData;) - in this case, this will create an INTEGER field in the database, holding the RowID value of the associated record (and this field content will be filled with pointer(RowID) and not with a real TSQLSafeData instance).
Using a TRecordReference type won't link to a particular table, but to any table of the database model: it will store in its associated INTEGER database field not only the RowID of the record, but also the table index as registered at TSQLModel creation. In order to access this AssociatedRecord property content, you could use either TSQLRest.Retrieve(AssociatedRecord) to get the corresponding record instance, or typecast it to the RecordRef wrapper structure to easily retrieve or set the associated table and RowID. You could also use the TSQLRecord.RecordReference(Model) method in order to get the value corresponding to an existing TSQLRecord instance.
According to the MVC model - see Model-View-Controller - the framework expects a common database model to be shared between client and server. A common function has been defined in the FileTables.pas unit, as such:
We'll see later its implementation. Just note for the moment that it will register the TSQLAuditTrail, TSQLMemo, TSQLData, TSQLSafeMemo, and TSQLSafeData classes as part of the database model. The order of the registration of those classes will be used for the AssociatedRecord: TRecordReference field of TSQLAuditTrail - e.g. a TSQLMemo record will be identified with a table index of 1 in the RecordReference encoded value. So it's mandatory to NOT change this order in any future modification of the database schema, without providing any explicit database content conversion mechanism.
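As an illustration, such a shared-model function could look like the following sketch (the function name and exact parameters are assumptions; the actual FileTables.pas implementation may differ):

function CreateFileModel(Owner: TSQLRest): TSQLModel;
begin
  // the registration order matters: it is encoded within TRecordReference values
  result := TSQLModel.Create(
    [TSQLAuditTrail, TSQLMemo, TSQLData, TSQLSafeMemo, TSQLSafeData]);
  result.Owner := Owner; // the model will be released together with its owner
end;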
Note that all above graphs were created directly from our SynProject tool, which is able to create custom graphs from the application source code it parsed.
28.3. Client Server implementation
Server-side is implemented in unit FileServer, with the following class:
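The declaration is not reproduced here; a minimal sketch of such a server class could be (names other than the "Event" service are assumptions):

type
  TFileServer = class(TSQLRestServerDB)
  public
    constructor Create; reintroduce;
    /// add a row to the TSQLAuditTrail table
    procedure AddAuditTrail(aEvent: TFileEvent; const aMessage: RawUTF8='';
      aAssociatedRecord: TRecordReference=0);
  published
    /// "Event" method-based service, as called remotely by the clients
    procedure Event(Ctxt: TSQLRestServerURIContext);
  end;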
As stated above, it inherits from TSQLRestServerDB to define a RESTful ORM based on the SQLite3 database engine, and defines a custom method-based service named Event.
The class constructor creates the whole server-side logic, following the shared data Model as defined in the FileTables unit:
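A minimal sketch of such a constructor, assuming the CreateFileModel() function shown above and an illustrative database file name:

constructor TFileServer.Create;
begin
  // SQLite3 file stored next to the executable (illustrative name)
  inherited Create(CreateFileModel(self), ChangeFileExt(ParamStr(0),'.db3'));
  CreateMissingTables; // create any missing table from the TSQLRecord definitions
end;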
You'll see that BLOB fields are handled just like other fields, even if they use their own RESTful GET/PUT dedicated URI (they are not JSON encoded, but transmitted as raw data, to save bandwidth and maintain the RESTful model). The framework handles it for you, thanks to its ORM orientation, in the TFileClient constructor:
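A minimal sketch of such a client-side constructor, assuming an HTTP client class and an illustrative port number (the actual FileClient.pas may use another protocol or parameters):

type
  TFileClient = class(TSQLHttpClient)
  public
    constructor Create(const aServer: AnsiString); reintroduce;
    /// log an event row into the remote TSQLAuditTrail table
    procedure AddAuditTrail(aEvent: TFileEvent; const aMessage: RawUTF8='';
      aAssociatedRecord: TRecordReference=0);
  end;

constructor TFileClient.Create(const aServer: AnsiString);
begin
  inherited Create(aServer, '888', CreateFileModel(self)); // '888' is an illustrative port
  ForceBlobTransfert := true; // always transmit BLOB fields together with the records
end;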
Here, we set ForceBlobTransfert := true, since by default BLOB fields won't be transmitted by TSQLRestClientURI, whereas our simple application expects them to be always available.
The same data Model, as defined in the FileTables unit, is used also on the client side.
On both sides, an AddAuditTrail() method is defined, to allow direct logging to the internal TSQLAuditTrail table. From the client, it uses the Event method-based service to perform the remote action:
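A hedged sketch of how the client-side AddAuditTrail() could call the "Event" method-based service (parameter names and the exact FileClient.pas implementation are assumptions):

procedure TFileClient.AddAuditTrail(aEvent: TFileEvent; const aMessage: RawUTF8;
  aAssociatedRecord: TRecordReference);
begin
  // calls the "Event" method-based service published by TFileServer
  CallBackGetResult('Event',
    ['event',ord(aEvent),'message',aMessage,'associated',aAssociatedRecord]);
end;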
28.4. User Interface generation
You could of course design your own User Interface without our framework. That is, it is perfectly feasible to use only the ORM part of it. This is for instance what would be needed to develop AJAX applications using its RESTful model - see REST - since such a feature is not yet integrated into our provided source code.
But for easily producing applications, the framework provides a mechanism based on both the ORM description and RTTI compiler-generated information, in order to create most of the User Interface by code.
It is able to generate a Ribbon-based application, in which each table is available via a Ribbon tab, with some actions associated with it.
So the framework will need to know:
Which tables must be displayed;
Which actions should be associated with each table;
How the User Interface should be customized (e.g. hint texts, grid layout on screen, reporting etc...);
How generic automated edition, using the mORMotUIEdit.pas unit, is to be generated.
To this list could be added an integrated event feature, which can be linked to actions and custom status, to provide a centralized handling of user-level logging (as used e.g. in the SynFile TSQLAuditTrail table) - please do not confuse this user-level logging with technical-level logging using the TSynLog and TSQLLog classes and "families" - see Enhanced logging.
28.4.1. Rendering
The current implementation of the framework User Interface generation handles two kinds of rendering:
Native VCL components;
Proprietary TMS components.
You can select which set of components is used, by defining - globally to your project (i.e. in the Project/Options/Conditionals menu) - the USETMSPACK conditional. If it is not set (which is the default), the VCL components will be used.
The native VCL components will use native Windows API components. So the look and feel of the application will vary depending on the Windows version it is running on. For instance, the resulting screen will differ when the application is run under Windows 2000, XP, Vista or Seven. The "ribbon" as generated with VCL components has most of the functionality of the Office 2007/2010 ribbon, but with a quite different layout.
The TMS components will have the same rendering whatever Windows version they are running on, and will display a "ribbon" very close to the official Office 2007/2010 version.
The Office UI licensing program was designed by Microsoft for software developers who wish to implement the Office UI as a software component and/or incorporate the Office UI into their own applications. If you use the TMS ribbon, it does not require acceptance of the Office UI License terms any more - see http://msdn.microsoft.com/en-us/office/aa973809.aspx
Here is the screen content, using the TMS components:
User Interface generated using TMS components
And here is the same application compiled using only VCL components, available from Delphi 6 up to the latest Delphi version:
User Interface generated using VCL components
We did not yet use the Ribbon component introduced in Delphi 2009. Its action-driven design won't make it easy to interface with the event-driven design of our User Interface handling, and we have to confess that this component has a rather bad reputation (at least in the Delphi 2009 version). Feel free to adapt our Open Source code to use it - we'll be very pleased to release a new version supporting it, but we don't have the time nor the necessity to do it by ourselves.
28.4.2. Enumeration types
A list of available actions should be defined, as an enumeration type:
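The enumeration itself is not reproduced here; a sketch matching the button captions listed below could be (the actual FileTables.pas type may contain additional values, e.g. a leading "no action" item):

type
  TFileAction = (
    faMark, faUnmarkAll, faQuery, faRefresh, faCreate, faEdit, faCopy,
    faExport, faImport, faDelete, faSign, faPrintPreview, faExtract, faSettings);
  TFileActions = set of TFileAction;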
Thanks to the Delphi RTTI, and "Un Camel Casing", the following list will generate a set of available buttons on the User Interface, named "Mark", "Unmark all", "Query", "Refresh", "Create", "Edit", "Copy", "Export", "Import", "Delete", "Sign", "Print preview", "Extract" and "Settings". Thanks to the mORMoti18n.pas unit (responsible for application i18n) and the TLanguageFile.Translate method, it can be translated on the fly from English into the current desired language, before display on screen or report creation.
A list of events, as used for the TSQLAuditTrail table, was also defined. Some events reflect the change made to the database rows (like feRecordModified), or generic application status (like feServerStarted):
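Only feRecordModified and feServerStarted are named in this text, and "Record digitally signed" appears below; the other values of this sketch are illustrative assumptions:

type
  TFileEvent = (
    feServerStarted, feRecordCreated, feRecordModified, feRecordDeleted,
    feRecordDigitallySigned);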
In the grid and the reports, RTTI and "uncamelcasing" will be used to display this list as regular text, like "Record digitally signed", and translated to the current language, if necessary.
28.4.3. ORM Registration
The User Interface generation will be made by creating an array of objects inheriting from the TSQLRibbonTabParameters type.
Firstly, a custom object type is defined, associating each ORM table with its User Interface parameters:
The Table property will map the ORM class to the User Interface ribbon tab. A custom CSV list of fields should be set in the Select property, to detail which database columns must be displayed on the grids and the reports. Each ribbon tab could contain one or more TSQLRecord tables: the Group property is set to identify on which ribbon group it should be shown. The grid column widths are defined as a FieldWidth string, in which each displayed field's length mean is set with one char per field (A=first Select column, Z=26th column) - a lowercase character will center the field data. For each table, the available actions are also set, and will be used to create the possible buttons shown on the ribbon toolbars (enabling or disabling a button is to be done at runtime).
Note that this array definition uses some previously defined individual constants (like DEF_SELECT, DEF_ACTIONS_DATA or GROUP_SAFE). This is a good practice, and could make code maintenance easier later on.
28.4.4. Main window
Once all this ORM and action information is available, the FileMain unit defines the following class to generate the expected ribbon-based User Interface:
Even if a real application may be truly Client-Server, we define a stand-alone mode. That is, a TFileServer instance is instantiated within the main application execution. Just un-define the DEBUGINTERNALSERVER conditional if you want a "pure client" version of the application - in this case, a stand-alone server shall be running.
All the ORM and actions defined in the FileTables unit are used to initialize the TFileRibbon content in the Ribbon field which will be the main entry point of all User Interface process.
The ActionClick() method is the main entry point of the application, and is called when the User clicks on any ribbon button. It is just a case Action of ... switch instruction, handling each TFileAction event as expected.
The Edit() method will allow editing of a given record's fields, via the separate TEditForm window, as defined in the FileEdit unit. We won't use the auto-generated window from RTTI in this case, since we expect a dedicated process to attach a picture to the corresponding TSQLFile item.
The ListDblClick() method will process any double click on the list to edit the corresponding item (faEdit action), or go to the record an audit trail row refers to, using a convenient local RecordRef wrapper variable:
procedure TMainForm.ListDblClick(Sender: TObject);
var P: TSQLRibbonTab;
    ref: RecordRef;
begin
  P := Ribbon.GetActivePage;
  if P<>nil then
    if P.Table=TSQLAuditTrail then begin
      if P.Retrieve(Client,P.List.Row) then begin
        ref.Value := TSQLAuditTrail(P.CurrentRecord).AssociatedRecord;
        Ribbon.GotoRecord(ref.Table(Client.Model),ref.ID);
      end;
    end else
      ActionClick(Sender,P.Table,ord(faEdit));
end;
The WMRefreshTimer() method will just transmit any WM_TIMER event to the ribbon process, in order to handle automatic refresh of the content, following the stateless approach of our RESTful framework:
procedure TMainForm.WMRefreshTimer(var Msg: TWMTimer);
begin
  Ribbon.WMRefreshTimer(Msg);
end;
You probably noticed that Client.OnIdle is set in the FormCreate method to map the TLoginForm.OnIdleProcessForm callback. This will let the HTTP client class use a background thread for all communication, instead of blocking the main application thread. The main User Interface will still be responsive (since OnIdleProcessForm will call Application.ProcessMessages), and change the cursor to crHourGlass in case of a slow request, or even display a temporary pop-up with "Please wait..." if the network is really slow, and the request takes more than 2 seconds (all those notification parameters can be changed in mORMotUILogin.pas).
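In FormCreate, this presumably boils down to a single assignment (a sketch based on the names quoted above):

  // run all HTTP requests in a background thread, keeping the UI responsive
  Client.OnIdle := TLoginForm.OnIdleProcessForm;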
28.5. Report generation
The following CreateReport method is overridden in FileClient.pas:
  /// class used to create the User interface
  TFileRibbon = class(TSQLRibbon)
  public
    /// overridden method used to customize the report content
    procedure CreateReport(aTable: TSQLRecordClass; aID: TID; aReport: TGDIPages;
      AlreadyBegan: boolean=false); override;
  end;
The reporting engine of the framework is implemented via the TGDIPages class, defined in the mORMotReport.pas unit:
Data is drawn in memory, then displayed or printed as desired;
High-level reporting methods are available (implementing tables, columns, titles and such), but you can have access to a TCanvas property which allows any possible content generation via standard VCL methods;
Allow preview (with anti-aliased drawing via GDI+) and printing;
Direct export as .txt or .pdf file;
Handles bookmarks, outlines and links inside the document.
By default, the CreateReport method of TSQLRibbon will write all editable field values to the content.
The method is overridden by the following code:
procedure TFileRibbon.CreateReport(aTable: TSQLRecordClass; aID: TID; aReport: TGDIPages;
  AlreadyBegan: boolean=false);
var Rec: TSQLFile;
    Pic: TBitmap;
    s: string;
    PC: PChar;
    P: TSQLRibbonTab;
begin
  with aReport do begin
    // initialize report
    Clear;
    BeginDoc;
    Font.Size := 10;
    if not aTable.InheritsFrom(TSQLFile) then
      P := nil else
      P := GetActivePage;
    if (P=nil) or (P.CurrentRecord.ID<>aID) or (P.Table<>aTable) then begin
      inherited; // default handler
      exit;
    end;
    Rec := TSQLFile(P.CurrentRecord);
    Caption := U2S(Rec.fName);
The report is cleared, and the BeginDoc method is called to start creating the internal canvas and band positioning. The font size is set, and parameters are checked against expected values. Then the current viewed record is retrieved from GetActivePage.CurrentRecord, and the report caption is set via the record Name field.
The report footer is then prepared, using the following methods:
AddPagesToFooterAt to add the current page number at a given position (here the left margin);
AddTextToFooterAt to add some custom text at a given position (here the right margin, after having changed the text alignment into right-aligned).
Note that the SaveLayout/RestoreSavedLayout methods are used to temporarily modify the current font and paragraph settings for printing the footer, then restore the default settings.
    // write global header at the beginning of the report
    DrawTitle(P.Table.CaptionName+' : '+Caption,true);
    NewHalfLine;
    AddColumns([6,40]);
    SetColumnBold(0);
    if Rec.SignatureTime<>0 then begin
      PC := Pointer(Format(sSignedN,[Rec.SignedBy,Iso2S(Rec.SignatureTime)]));
      DrawTextAcrossColsFromCSV(PC,$C0C0FF);
    end;
    if Rec.fCreated<>0 then
      DrawTextAcrossCols([sCreated,Iso2S(Rec.fCreated)]);
    if Rec.fModified<>0 then
      DrawTextAcrossCols([sModified,Iso2S(Rec.fModified)]);
    if Rec.fKeyWords='' then
      s := sNone else begin
      s := U2S(Rec.fKeyWords);
      ExportPDFKeywords := s;
    end;
    DrawTextAcrossCols([sKeyWords,s]);
    NewLine;
    Pic := LoadFromRawByteString(Rec.fPicture);
    if Pic<>nil then
    try
      DrawBMP(Pic,0,Pic.Width div 3);
    finally
      Pic.Free;
    end;
Report header is written using the following methods:
DrawTitle to add a title to the report, with a black line below it (second parameter to true) - this title will be added to the report global outline, and will be exported as such in .pdf on request;
NewHalfLine and NewLine will leave some vertical gap between two paragraphs;
AddColumns, with parameters set as percentages, will initialize a table with the first column content defined as bold (SetColumnBold(0));
DrawTextAcrossCols and DrawTextAcrossColsFromCSV will fill a table row according to the text specified, one string per column;
DrawBMP will draw a bitmap to the report, whose content is loaded using the generic LoadFromRawByteString function implemented in SynGdiPlus.pas;
U2S and Iso2S functions, as defined in mORMoti18n.pas, are used for conversion of some text or TTimeLog/TUnixTime values into text formatted with the current language settings (i18n).
    // write report content
    DrawTitle(sContent,true);
    SaveLayout;
    Font.Name := 'Courier New';
    if Rec.InheritsFrom(TSQLSafeMemo) then
      DrawText(sSafeMemoContent) else
    if Rec.InheritsFrom(TSQLMemo) then
      DrawTextU(TSQLMemo(Rec).Content) else
    if Rec.InheritsFrom(TSQLData) then
    with TSQLData(Rec) do begin
      DrawTextU(Rec.fName);
      s := PictureName(TSynPicture.IsPicture(TFileName(Rec.fName)));
      if s<>'' then
        s := format(sPictureN,[s]) else
      if not Rec.InheritsFrom(TSQLSafeData) then
        s := U2S(GetMimeContentType(Pointer(Data),Length(Data),TFileName(Rec.fName)));
      if s<>'' then
        DrawTextFmt(sContentTypeN,[s]);
      DrawTextFmt(sSizeN,[U2S(KB(Length(Data)))]);
      NewHalfLine;
      DrawText(sDataContent);
    end;
    RestoreSavedLayout;
Then the report content is appended, according to the record class type:
DrawText, DrawTextU and DrawTextFmt are able to add a paragraph of text to the report, with the current alignment - in this case, the font is set to 'Courier New' so that it will be displayed with fixed width;
GetMimeContentType is used to retrieve the exact type of the data stored in this record.
The ExportPDFApplication and ExportPDFForceJPEGCompression properties (together with the ExportPDFKeywords one) are able to customize how the report will be exported into a .pdf file. In our case, we want to notify that SynFile generated those files, and that the header bitmap should be compressed as JPEG before writing to the file (in order to produce a small sized .pdf).
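A sketch of this customization, using the property names quoted above (the assigned values are illustrative only):

  ExportPDFApplication := 'SynFile';    // "producer" notification in the .pdf file
  ExportPDFForceJPEGCompression := 80;  // compress bitmaps as JPEG before export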
You may have noticed that textual constants were defined as resourcestring, as such:
resourcestring
sCreated = 'Created';
sModified = 'Modified';
sKeyWords = 'KeyWords';
sContent = 'Content';
sNone = 'None';
sPageN = 'Page %d / %d';
sSizeN = 'Size: %s';
sContentTypeN = 'Content Type: %s';
sSafeMemoContent = 'This memo is password protected.'#13+
'Please click on the "Edit" button to show its content.';
sDataContent = 'Please click on the "Extract" button to get its content.';
sSignedN = 'Signed,By %s on %s';
sPictureN = '%s Picture';
The mORMoti18n.pas unit is able to parse all those resourcestring from a running executable, via its ExtractAllResources function, and create a reference text file to be translated into any handled language.
Creating a report from code does make sense in an ORM. Since we have most useful data at hand as Delphi classes, code can be shared among all kinds of reports, and a few lines of code are able to produce complex reports, with enhanced rendering, unified layout, direct internationalization and export capabilities.
Note that the mORMotReport.pas unit uses UTF-16 encoded strings, i.e. our SynUnicode type, which is either UnicodeString since Delphi 2009, or WideString for older versions. WideString is known to have performance issues, due to the use of slow BSTR API calls - so if you want to create huge reports with pre-Unicode versions of Delphi and our report engine, consider adding a reference to our SynFastWideString.pas unit as the first unit of your .dpr uses clause, for a potentially huge speed enhancement. See Unicode and UTF-8 for more details, especially the restrictions of use, since it will break any attempt to use BSTR parameters with any OLE/COM object.
28.6. Application i18n and L10n
In computing, internationalization and localization (also spelled internationalisation and localisation) are means of adapting computer software to different languages, regional differences and technical requirements of a target market:
Internationalization (i18n) is the process of designing a software application so that it can be adapted to various languages;
Localization (L10n) is the process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text, e.g. for dates display.
Our framework handles both features, via the mORMoti18n.pas unit. We just saw above how resourcestring defined in the source code are retrieved from the executable and can be translated on the fly. The unit extends this to visual forms, and even captions generated from RTTI - see RTTI.
The unit expects all textual content (both resourcestring and RTTI derived captions) to be correct English text. A list of all used textual elements will be retrieved, then each element is hashed into a unique numerical value. When a specific locale is set for the application, the unit will search for a .msg text file in the executable folder matching the expected locale definition. For instance, it will search for FR.msg for translation into French.
In order to translate the whole user interface, a corresponding .msg file is to be supplied in the executable folder. Neither the source code nor the executable needs to be rebuilt to add a new language. And since this file is indeed a plain text file, even a non-developer (e.g. an end-user) is able to add a new language, starting from another .msg file.
28.6.1. Creating the reference file
In order to begin a translation task, the mORMoti18n.pas unit is able to extract all textual resource from the executable, and create a reference text file, containing all English sentences and words to be translated, associated with their numerical hash value.
It will in fact:
Extract all resourcestring text;
Extract all captions generated from RTTI (e.g. from enumerations or class properties names);
Extract all embedded dfm resources, and create per-form sections, allowing a custom translation of displayed captions or hints.
This creation step needs a compilation of the executable with the EXTRACTALLRESOURCES conditional defined, globally to the whole application (a full rebuild is necessary after having added or suppressed this conditional from the Project / Options / Folders-Conditionals IDE field).
Then the ExtractAllResources global procedure is to be called somewhere in the code.
For instance, here is how this is implemented in FileMain.pas, for the framework main demo:
procedure TMainForm.FormShow(Sender: TObject);
begin
{$ifdef EXTRACTALLRESOURCES}
  ExtractAllResources(
    // first, all enumerations to be translated
    [TypeInfo(TFileEvent),TypeInfo(TFileAction),TypeInfo(TPreviewAction)],
    // then some class instances (including the TSQLModel will handle all TSQLRecord)
    [Client.Model],
    // some custom classes or captions
    [],[]);
  Close;
{$else}
  //i18nLanguageToRegistry(lngFrench);
{$endif}
  Ribbon.ToolBar.ActivePageIndex := 1;
end;
A global EXTRACTALLRESOURCES conditional can be defined temporarily for the project: from the Delphi IDE, use Project/Options to define the conditional, Project/Run to create the .messages file as expected, and finally Project/Options again to undefine EXTRACTALLRESOURCES and rebuild a regular executable.
The TFileEvent and TFileAction enumerations RTTI information is supplied, together with the current TSQLModel instance. All TSQLRecord classes (and therefore properties) will be scanned, and all needed English caption text will be extracted.
The Close method is then called, since we don't want to use the application itself, but only extract all resources from the executable.
Running the executable once will create a SynFile.messages text file in the SynFile.exe folder, containing all English text:
[TEditForm]
Name.EditLabel.Caption=_2817614158 Name
KeyWords.EditLabel.Caption=_3731019706 KeyWords
[TLoginForm]
Label1.Caption=_1741937413 &User name:
Label2.Caption=_4235002365 &Password:
[TMainForm]
Caption=_16479868 Synopse mORMot demo - SynFile
[Messages]
2784453965=Memo
2751226180=Data
744738530=Safe memo
895337940=Safe data
2817614158=Name
1741937413=&User name:
4235002365=&Password:
16479868= Synopse mORMot demo - SynFile
940170664=Content
3153227598=None
3708724895=Page %d / %d
2767358349=Size: %s
4281038646=Content Type: %s
2584741026=This memo is password protected.|Please click on the "Edit" button to show its content.
3011148197=Please click on the "Extract" button to get its content.
388288630=Signed,By %s on %s
(...)
The main section of this text file is named [Messages]. In fact, it contains all English extracted texts, as NumericalKey=EnglishText pairs. Note this will reflect the exact content of resourcestring or RTTI captions, including formatting characters (like %d), and replacing line feeds (#13) by the special | character (a line feed is not expected in a one-line-per-pair file layout). Some other text lines are separated by a comma. This is usual for instance for hint values, as expected by the code.
As requested, each application form has its own section (e.g. [TEditForm], [TMainForm]), proposing some default translation, specified by a numerical key (for instance Label1.Caption will use the text identified by 1741937413 in the [Messages] section). The underscore character before the numerical key is used to refer to this value. Note that if no _NumericalKey is specified, a plain text can be supplied, in order to reflect a specific use of the generic text on the screen.
28.6.2. Adding a new language
In order to translate the whole application into French, the following FR.msg file could be made available in the SynFile.exe folder:
[Messages]
2784453965=Texte
2751226180=Données
744738530=Texte sécurisé
895337940=Données sécurisées
2817614158=Nom
1741937413=&Nom utilisateur:
4235002365=&Mot de passe:
16479868= Synopse mORMot Framework demo - SynFile
940170664=Contenu
3153227598=Vide
3708724895=Page %d / %d
2767358349=Taille: %s
4281038646=Type de contenu: %s
2584741026=Le contenu de ce memo est protégé par un mot de passe.|Choisissez "Editer" pour le visualiser.
3011148197=Choisissez "Extraire" pour enregistrer le contenu.
388288630=Signé,Par %s le %s
(....)
Since no form-level custom captions (e.g. [TLoginForm]) have been defined in this FR.msg file, the default numerical values will be used. In our case, Name.EditLabel.Caption will be displayed using the text specified by 2817614158, i.e. 'Nom'. You can specify a custom translation for a given field on any form: sometimes, the text should be adapted with a given context.
Note that the special characters %s %d , | markup was preserved: only the plain English text has been translated to the corresponding French.
28.6.3. Language selection
User Interface language can be specified at execution.
There are two ways to change the application language:
Manual translation of every form;
Hook of the common TForm / TFrame classes, for automatic translation.
In manual translation mode:
You can change languages on the fly, i.e. no need to restart the application;
But you must modify your code to explicitly translate the forms after their creation;
And you won't be able to translate dialogs without sources (e.g. third-party dialogs).
TForm/TFrame hook, on its side, has the following behavior:
You do not need to modify your code, since it will be global to the application;
It will work also for any third-party dialog, even if you do not have the source of it;
But you can't change the language on the fly: you need to restart the application.
28.6.4. Manual translation
Once per application, you should call SetCurrentLanguage() to set the global Language object and all related Delphi locale settings.
Then, in the OnShow event of each form, you should call FormTranslateOne(), e.g. as follows:
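For instance (a sketch: TMyForm is a hypothetical form class, and the exact FormTranslateOne() signature may differ, e.g. it may be a method of the global Language instance):

procedure TMyForm.FormShow(Sender: TObject);
begin
  FormTranslateOne(self); // translate this form's captions and hints once
end;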
Note that a list of already translated forms is maintained by the unit, when you call FormTranslate(). Therefore:
All specified forms will be translated again by any further SetCurrentLanguage() call;
But none of these forms must be freed after a FormTranslate([]) call - use FormTranslateOne() instead to translate a given form once, e.g. for all temporary created forms.
28.6.5. TForm / TFrame hook
If the USEFORMCREATEHOOK conditional is defined, the mORMoti18n.pas unit will hook TCustomForm.OnCreate method to translate all its nested components. It will also intercept TCustomFrame.Create() to allow automatic translation of its content.
Since the language must be known at program startup, before any TForm is actually created, the language will be set in the Operating System registry. The HKEY_CURRENT_USER\Software\[CompanyName]i18n\ key should contain one value per application (i.e. the lowercase .exe file name without its path), which will identify the abbreviation of the expected language. If there is no entry in this registry key for the given application, the current Windows locale will be used.
For instance, if you define the USEFORMCREATEHOOK conditional for your project, and run at least once - e.g. in FileMain.pas, for the framework main demo - a line such as the following:
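(presumably the i18nLanguageToRegistry() call shown commented-out in the FormShow code above)

  i18nLanguageToRegistry(lngFrench);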
then it will set the main application language to French. At next startup, the unit will search for a FR.msg file, which will be used to translate all screen layout, including all RTTI-generated captions.
Of course, for a final application, you'll need to change the language by a common setting. See i18nAddLanguageItems, i18nAddLanguageMenu and i18nAddLanguageCombo functions and procedures to create your own language selection dialog, using a menu or a combo box, for instance.
28.6.6. Localization
Take a look at the TLanguageFile class. After the main language has been set, you can use the global Language instance in order to localize your application layout.
The mORMoti18n unit will register itself to some methods of mORMot.pas, in order to translate the RTTI-level text into the current selected language. See for instance i18nDateText.
29. Main SynFile Demo source
The Main SynFile Demo makes use of the following units.
Client-Server HTTP/1.1 over TCP/IP protocol communication shall be made available by some dedicated classes, and ready to be accessed from outside any Delphi Client (e.g. the implementation should be AJAX-ready)
A Database Grid shall be made available to provide data browsing in the Client Application - it shall handle easy browsing, by column resizing and sorting, on the fly customization of the cell content
Internationalization (i18n) of the whole User Interface shall be made available by defining some external text files: Delphi resourcestring shall be translatable on the fly, custom window dialogs automatically translated before their display, and User Interface generated from RTTI should be included in this i18n mechanism
A reporting feature, with full preview and export as PDF or TXT files, shall be integrated
30.1. Client Server ORM/SOA framework
30.1.1. SWRS # DI-2.1.1 The framework shall be Client-Server oriented
Design Input 2.1.1 (Initial release): The framework shall be Client-Server oriented.
The Client-Server model of computing is a distributed application structure that partitions tasks or workloads between service providers, called servers, and service requesters, called clients.
Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that is running one or more server programs which share its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers which await (listen for) incoming requests.
The Synopse mORMot Framework shall implement such a Client-Server model via a set of dedicated classes, over various communication protocols, but in a unified way. Applications shall be able to easily change the protocol used, just by adjusting the class type used in the client code. By design, the only requirement is that protocols and associated parameters match between the Client and the Server.
This specification is implemented by the following units:
30.1.2. SWRS # DI-2.1.1.1 A RESTful mechanism shall be implemented
Design Input 2.1.1.1 (Initial release): A RESTful mechanism shall be implemented.
REST-style architectures consist of clients and servers, as was stated in SWRS # DI-2.1.1. Clients initiate requests to servers; servers process requests and return appropriate responses. Requests and responses are built around the transfer of "representations" of "resources". A resource can be essentially any coherent and meaningful concept that may be addressed. A representation of a resource is typically a document that captures the current or intended state of a resource.
In the Synopse mORMot Framework, so called "resources" are individual records of the underlying database, or list of individual fields values extracted from these databases, by a SQL-like query statement.
This specification is implemented by the following units:
30.1.3. SWRS # DI-2.1.1.2.1 Client-Server Direct communication shall be available inside the same process
Design Input 2.1.1.2 (Initial release): Communication should be available directly in the same process memory, or remotely using Named Pipes, Windows messages or HTTP/1.1 protocols.
Client-Server Direct communication shall be available inside the same process.
This specification is implemented by the following units:
30.1.6. SWRS # DI-2.1.1.2.4 Client-Server HTTP/1.1 over TCP/IP protocol communication shall be made available by some dedicated classes, and ready to be accessed from outside any Delphi Client (e.g. the implementation should be AJAX-ready)
Client-Server HTTP/1.1 over TCP/IP protocol communication shall be made available by some dedicated classes, and ready to be accessed from outside any Delphi Client (e.g. the implementation should be AJAX-ready).
This specification is implemented by the following units:
30.1.7. SWRS # DI-2.1.2 UTF-8 JSON format shall be used to communicate
Design Input 2.1.2 (Initial release): UTF-8 JSON format shall be used to communicate.
JSON, as defined in the Software Architecture Design (SAD) document, is used in the Synopse mORMot Framework for all Client-Server communication. JSON (an acronym for JavaScript Object Notation) is a lightweight text-based open standard designed for human-readable data interchange. Despite its relationship to JavaScript, it is language-independent, with parsers available for virtually every programming language.
JSON shall be used in the framework for returning individual database record content, in a disposition which could make it compatible with direct JavaScript interpretation (i.e. easily creating JavaScript object from JSON content, in order to facilitate AJAX application development). From the Client to the Server, record content is also JSON-encoded, in order to be easily interpreted by the Server, which will convert the supplied field values into proper SQL content, ready to be inserted to the underlying database.
JSON shall also be used for the transmission of requested rows of data. It therefore provides an easy way of data formatting between the Client and the Server.
The Synopse mORMot Framework shall use UTF-8 encoding for the character transmission inside its JSON content. UTF-8 (8-bit Unicode Transformation Format) is a variable-length character encoding for Unicode. UTF-8 encodes each character (code point) in 1 to 4 octets (8-bit bytes). The first 128 characters of the Unicode character set (which correspond directly to the ASCII) use a single octet with the same binary value as in ASCII. Therefore, UTF-8 can encode any Unicode character, avoiding the need to figure out and set a "code page" or otherwise indicate what character set is in use, and allowing output in multiple languages at the same time. For many languages there has been more than one single-byte encoding in usage, so even knowing the language was insufficient information to display it correctly.
This specification is implemented by the following units:
30.1.8. SWRS # DI-2.1.3 The framework shall use an innovative ORM (Object-relational mapping) approach, based on classes RTTI (Runtime Type Information)
Design Input 2.1.3 (Initial release): The framework shall use an innovative ORM (Object-relational mapping) approach, based on classes RTTI (Runtime Type Information).
ORM, as defined in the Software Architecture Design (SAD) document, is used in the Synopse mORMot Framework for accessing data record fields directly from Delphi Code.
Object-relational mapping (ORM, O/RM, and O/R mapping) is a programming technique for converting data between incompatible type systems in relational databases and object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the Delphi programming language.
The published properties of classes inheriting from a new generic type named TSQLRecord are used to define the field properties of the data. Accessing database records (for reading or update) shall be made by using these classes properties, and some dedicated Client-side methods.
This specification is implemented by the following units:
30.1.9. SWRS # DI-2.1.4 The framework shall provide some Cross-Cutting components
Design Input 2.1.4 (Initial release): The framework shall provide some Cross-Cutting components.
Cross-Cutting infrastructure layers shall be made available for handling data filtering and validation, security, session, cache, logging and testing (framework uses test-driven approach and features stubbing and mocking).
All crosscutting scenarios are coupled, so you benefit from consistent APIs and documentation, a lot of code reuse, and a JSON/RESTful orientation from the ground up.
This specification is implemented by the following units:
30.1.10. SWRS # DI-2.1.5 The framework shall offer a complete SOA process
Design Input 2.1.5 (Initial release): The framework shall offer a complete SOA process.
In order to follow a Service Oriented Architecture design, your application's business logic can be implemented in several ways using mORMot:
Via some TSQLRecord inherited classes, inserted into the database model, and accessible via some RESTful URI - this is implemented by our ORM architecture - see SWRS # DI-2.1.3;
By some RESTful services, implemented in the Server as published methods, and consumed in the Client via native Delphi methods;
Defining some RESTful service contracts as standard Delphi interface types, and then running them seamlessly on both client and server sides.
This specification is implemented by the following units:
30.2.1. SWRS # DI-2.2.1 The SQLite3 engine shall be embedded to the framework
Design Input 2.2.1 (Initial release): The SQLite3 engine shall be embedded to the framework.
The SQLite3 database engine is used in the Synopse mORMot Framework as its kernel database engine. SQLite3 is an ACID-compliant embedded relational database management system contained in a C programming library.
This library shall be linked statically to the Synopse mORMot Framework, or used via the official external sqlite3.dll distribution, and shall interact directly with the Delphi application process.
The Synopse mORMot Framework shall enhance the standard SQLite3 database engine by introducing some new features stated in the Software Architecture Design (SAD) document, related to the Client-Server purpose of the framework - see SWRS # DI-2.1.1.
This specification is implemented by the following units:
30.2.2. SWRS # DI-2.2.2 The framework libraries, including all its SQLite3 related features, shall be tested using Unitary testing
Design Input 2.2.2 (Initial release): The framework libraries, including all its SQLite3 related features, shall be tested using Unitary testing.
The Synopse mORMot Framework shall use all integrated Unitary testing features provided by a common testing framework integrated to all Synopse products. This testing shall be defined by classes, in which individual published methods define the actual testing of most framework features.
All tests shall be run at once, for example before any software release, or after any modification to the framework code, in order to avoid most regression bugs.
This specification is implemented by the following units:
30.3.1. SWRS # DI-2.3.1.1 A Database Grid shall be made available to provide data browsing in the Client Application - it shall handle easy browsing, by column resizing and sorting, on the fly customization of the cell content
Design Input 2.3.1 (Initial release): A User Interface, with buttons and toolbars, shall be easily created from code, with no RAD needed, using RTTI and data auto-description.
A Database Grid shall be made available to provide data browsing in the Client Application - it shall handle easy browsing, by column resizing and sorting, on the fly customization of the cell content.
This specification is implemented by the following units:
30.3.3. SWRS # DI-2.3.1.3 Internationalization (i18n) of the whole User Interface shall be made available by defining some external text files: Delphi resourcestring shall be translatable on the fly, custom window dialogs automatically translated before their display, and User Interface generated from RTTI should be included in this i18n mechanism
Internationalization (i18n) of the whole User Interface shall be made available by defining some external text files: Delphi resourcestring shall be translatable on the fly, custom window dialogs automatically translated before their display, and User Interface generated from RTTI should be included in this i18n mechanism.
This specification is implemented by the following units:
30.3.4. SWRS # DI-2.3.2 A reporting feature, with full preview and export as PDF or TXT files, shall be integrated
Design Input 2.3.2 (Initial release): A reporting feature, with full preview and export as PDF or TXT files, shall be integrated.
The Synopse mORMot Framework shall provide a reporting feature, which could be used stand-alone, or linked to its database mechanism. Reports shall not be created using a RAD approach (e.g. defining bands and fields with the mouse in the IDE), but shall be defined from code, by using some dedicated methods, adding text, tables or pictures to the report. Therefore, any kind of report can be generated.
These reports shall be previewed on screen, and exported as PDF or TXT on request.
This specification is implemented by the following units: